
onpix / llnerf

78 stars · 3 watchers · 8 forks · 204 KB

[ICCV2023] Lighting up NeRF via Unsupervised Decomposition and Enhancement

Home Page: https://whyy.site/paper/llnerf

License: MIT License

Python 98.26% Shell 1.74%
3d-reconstruction computer-vision iccv2023 low-light-image-enhancement nerf neural-radiance-field pytorch

llnerf's Introduction

Abstract: Neural Radiance Field (NeRF) is a promising approach for synthesizing novel views, given a set of images and the corresponding camera poses of a scene. However, images photographed from a low-light scene can hardly be used to train a NeRF model to produce high-quality results, due to their low pixel intensities, heavy noise, and color distortion. Combining existing low-light image enhancement methods with NeRF methods also does not work well due to the view inconsistency caused by the individual 2D enhancement process. In this paper, we propose a novel approach, called Low-Light NeRF (or LLNeRF), to enhance the scene representation and synthesize normal-light novel views directly from sRGB low-light images in an unsupervised manner. The core of our approach is a decomposition of radiance field learning, which allows us to enhance the illumination, reduce noise and correct the distorted colors jointly with the NeRF optimization process. Our method is able to produce novel view images with proper lighting and vivid colors and details, given a collection of camera-finished low dynamic range (8-bits/channel) images from a low-light scene. Experiments demonstrate that our method outperforms existing low-light enhancement methods and NeRF methods.

📻 News

  • [2023/07/24] · Updated the normal-light scenes data.
  • [2023/07/21] · Code released! 🔥🔥🔥
  • [2023/07/17] · Paper accepted by ICCV 2023.

⌨️ How to run

  1. βš™οΈ Setup the envuronment: We provide the exported conda yaml file environment.yml. Please make sure you installed conda and run:
conda env create -f environment.yml
conda activate llnerf

Note that this repo requires jax and flax. We use CUDA 11.7 and cuDNN 8.2. If you need to set up the Python environment with a different CUDA+cuDNN version, we suggest you manually install jax, jaxlib, and flax to ensure compatibility with your CUDA environment; please refer to their official documentation for installation instructions and troubleshooting.
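For example, a minimal sketch following jax's historical install instructions (the exact extras name depends on your CUDA/cuDNN versions, and the CUDA wheels are served from Google's jax-releases index rather than PyPI):

pip install --upgrade "jax[cuda11_cudnn82]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
pip install flax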

  2. 📂 Download the dataset: Our dataset is here. Please download and unzip it.

  3. 🏃 Training: Please modify scripts/train.sh first by replacing the dataset path and scene name with yours, then run bash scripts/train.sh (see the sketch after this list).

  4. 🎥 Rendering: Please modify scripts/render.sh first by replacing the dataset path and scene name with yours, then run bash scripts/render.sh.
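For orientation, here is a hedged sketch of the kind of edit steps 3 and 4 ask for. The flag and config names below follow MultiNeRF, which this repo builds on, and are assumptions rather than this repo's verified interface; check scripts/train.sh itself for the real contents.

DATA_DIR=/path/to/llnerf-dataset    # <- replace with your dataset root
SCENE=book                          # <- replace with your scene name
python -m train \
  --gin_configs=configs/your_config.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}/${SCENE}'" \
  --gin_bindings="Config.checkpoint_dir = 'ckpt/${SCENE}'"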

Note: this version of the code has not undergone comprehensive testing and might contain minor issues. I will test it thoroughly and plan to release an updated version in the coming days. If you encounter any problems, feel free to open a GitHub issue.

🔗 Cite This Paper

@inproceedings{wang2023lighting,
  title={Lighting up NeRF via Unsupervised Decomposition and Enhancement},
  author={Haoyuan Wang and Xiaogang Xu and Ke Xu and Rynson W.H. Lau},
  booktitle={ICCV},
  year={2023}
}

😃 Acknowledgments

This repository is based on jax and MultiNeRF; special thanks to their authors!

llnerf's People

Contributors

onpix

llnerf's Issues

About smooth loss

Hi, thanks for your great work!

I am trying to replicate your work in the PyTorch framework. Following the structure of your code, I get these preliminary results:
[image]

It is obvious that there is an error in the part that obtains the enhanced image.
I initially found that my understanding of the smooth loss was wrong.
In your code, you define three types of rays to calculate the loss by taking the raw pixel plus its horizontal and vertical neighbors. So, in my understanding, the ray origin and ray direction of each batch should have shape (batch, 3, 3)?

However, in my PyTorch code I defined the ray origin and ray direction of each batch with shape (batch, 3), so I ignored the middle dimension; that is, I did not build the three types of rays, which makes the smooth loss invalid.

So how should I fix it? Simply sample a grid of extra pixels to get the three types of rays? In that case, before entering the network, the input needs to be reshaped to two dimensions (batch*3, 3) to pass through the linear layers, and finally reshaped back to (batch, 3, 3) to calculate the smooth loss. Can such a method successfully obtain enhanced images?

I would appreciate your help!
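A minimal sketch of exactly that sampling/reshaping idea (PyTorch; every name below is hypothetical rather than taken from the repo): sample each pixel together with its horizontal and vertical neighbors to get (batch, 3, 3) rays, flatten to (batch*3, 3) for the linear layers, then reshape back for the smooth loss.

import torch

def sample_ray_triplets(origins, directions, batch_size):
    # origins, directions: (H, W, 3) per-pixel ray maps for one training image.
    H, W, _ = origins.shape
    ys = torch.randint(0, H - 1, (batch_size,))
    xs = torch.randint(0, W - 1, (batch_size,))
    # Center pixel plus its horizontal and vertical neighbors -> (batch, 3, 3).
    o = torch.stack([origins[ys, xs], origins[ys, xs + 1], origins[ys + 1, xs]], dim=1)
    d = torch.stack([directions[ys, xs], directions[ys, xs + 1], directions[ys + 1, xs]], dim=1)
    return o, d

def render_triplets(model, o, d):
    # Flatten to (batch*3, 3) for the linear layers, then restore (batch, 3, C)
    # so the smooth loss can compare each center ray with its two neighbors.
    b = o.shape[0]
    out = model(o.reshape(b * 3, 3), d.reshape(b * 3, 3))
    return out.reshape(b, 3, -1)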

About train my own datasets

Hi,
When I train on my own dataset and render it, I get an mp4 and some PNGs. However, the mp4 I got cannot be opened:
[image]

So I checked the PNGs, but they show the wrong scene:
rgb_enhanced_008
while the original, normal scene is:
[image]

So what is the problem?
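A hedged debugging step for the unplayable mp4: it may simply use a codec or pixel format your player lacks; re-encoding it with ffmpeg (a generic tool, not part of this repo; the path below is a placeholder) will tell you whether the file itself is valid:

ffmpeg -i path/to/your_render.mp4 -c:v libx264 -pix_fmt yuv420p playable.mp4   # re-encode to widely supported H.264/yuv420p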

Inquiry regarding dataset image resolutions

I want to express my gratitude for making the LLNeRF dataset available. However, a question came up while I was working with the dataset. I observed that for the evaluation of scenes like still2, still3, and still4 in the paper, there appears to be a difference in resolution between the normal-light and low-light images. Could you kindly explain how this resolution disparity was handled during the preprocessing phase?

About pycolmap

When I run scripts/train.sh, I met a problem with pycolmap:
from pycolmap import SceneManager
ImportError: cannot import name 'SceneManager' from 'pycolmap' (/home/gtyssg/miniconda3/envs/jax/lib/python3.9/site-packages/pycolmap.cpython-39-x86_64-linux-gnu.so)
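A hedged pointer: MultiNeRF, which this repo builds on, imports SceneManager from rmbrualla's pycolmap fork rather than the pycolmap package on PyPI, so if this repo follows the same layout (an assumption), cloning the fork into internal/pycolmap may fix the import:

pip uninstall -y pycolmap                                      # the PyPI wheel has no SceneManager
git clone https://github.com/rmbrualla/pycolmap.git ./internal/pycolmap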

No such file or directory:'/llnerf-dataset/***/transforms.json'

When I try to run "bash train.sh", I first got a FileNotFoundError: "No such file or directory: '/LLNeRF/llnerf-dataset/book/transforms.json'". Then I tried all the datasets, but none of them contains transforms.json. Could you help solve this problem? Thanks so much!

Comparison results in Table 1

Thanks for your nice work! I have a question: Table 1 reports comparison results on normal-light scenes, but the dataset I downloaded does not seem to include the normal-light part. Where can I find the normal-light images? Thanks~

[image]

About train!

Hi,
My GPU is a V100 with 32 GB of memory, and I have only one GPU. But I found that with your default settings, training is very slow. For example, with the default settings (batch_size=1024, ...), it takes about half an hour to finish 100 steps, so to finish 100k steps I would need twenty days!
My question is: why is the speed so slow? Can I change the batch_size to speed it up, or do other settings need to change? Can you give me some suggestions?

AttributeError: module 'pycolmap' has no attribute 'SceneManager'

Hello! I want to ask a question about pycolmap. I first pip installed pycolmap in my created environment. However, when I run "bash train.sh", I met this AttributeError: module 'pycolmap' has no attribute 'SceneManager'. I notice that datasets.py states that a NeRF-specific extension to the third-party package exists, but I still cannot find the file scene_manager.py. Can you provide some suggestions? Thank you very much.

About the camera pose ground truth

Hello, could you please share any suggestions about how you obtained the ground truth of the camera poses when collecting the dataset? I did not find any instructions in your paper. Do you use COLMAP to register the low-light images in your dataset?

about train

I would like to ask how long it takes to train a model on a scene like book.

Out of memory

Hello, I have solved the pycolmap problem, but when I run "bash scripts/train.sh", I get a new error, as follows:
jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory allocating 20029400304 bytes.

Thank you so much for the README and code; if you can give me a little hint, that would be really appreciated!
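A hedged workaround for errors like the one above: jax preallocates roughly 90% of GPU memory by default, and these XLA environment variables (documented in jax's GPU memory allocation notes) reduce that; lowering the batch size in the training config should also shrink the per-step allocation.

export XLA_PYTHON_CLIENT_PREALLOCATE=false   # allocate on demand instead of ~90% up front
export XLA_PYTHON_CLIENT_MEM_FRACTION=.6     # alternatively, cap preallocation at 60%
bash scripts/train.sh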

About the red background in rendering results

Hello, I found that when rendering with your dataset and with a dataset I made myself, the final result has a red blur. This layer of red looks fine on your dataset, but it shows up as a messy pink shade on my dataset. I would like to ask if this red can be removed?

About code

[image]
Does the code above correspond to the density network?

And if I want to remove pos_enc, can I do this?

x = lifted_means
inputs = x

About data supervision

Why is the enhancement process unsupervised in the paper, yet a data-supervision term $\mathbb{E}[\eta(\tilde{c}_r) - \eta(c_r)]$ is added, which makes the enhanced color approach the color of the low-light image? How can this achieve enhancement?

About environment!

I can't create the environment successfully:

Collecting ipython==8.4.0
  Downloading http://mirrors.aliyun.com/pypi/packages/fe/10/0a5925e6e8e4c948b195b4c776cae0d9d7bc6382008a0f7ed2d293bf1cfb/ipython-8.4.0-py3-none-any.whl (750 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 750.8/750.8 kB 7.7 MB/s eta 0:00:00
Collecting jax==0.3.17
  Downloading http://mirrors.aliyun.com/pypi/packages/87/74/950b7af8176499fdc3afea6352b4734325a1c735c026eeb3918b7e422b9a/jax-0.3.17.tar.gz (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 29.6 MB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement jaxlib==0.3.15+cuda11.cudnn82 (from versions: 0.1.63, 0.1.74, 0.1.75, 0.1.76, 0.3.0, 0.3.2, 0.3.5, 0.3.7, 0.3.10, 0.3.14, 0.3.15, 0.3.20, 0.3.22, 0.3.24, 0.3.25, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.6, 0.4.7, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.15)
ERROR: No matching distribution found for jaxlib==0.3.15+cuda11.cudnn82

failed

CondaEnvException: Pip failed

By the way, you said the CUDA version is 11.8, but in environment.yml the CUDA version is 11.3. Is that right?
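A hedged note on the pip error above: CUDA builds of jaxlib were never published on PyPI, so a pinned wheel like jaxlib==0.3.15+cuda11.cudnn82 has to be installed from Google's jax-releases index, e.g.:

pip install jaxlib==0.3.15+cuda11.cudnn82 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html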

Pair Data Problem.

Hi, dear author, thanks again for your work. I am trying to run your data in my own codebase now, but I am confused by two problems; I think I may have gone wrong somewhere:

(1) There are no paired data for the comparison (the still2 scene, for example):
The normal-light images:
[image]
The low-light images:
[image]

As shown, DSC01652.png ... is not in the normal-light part.

(2) When I run the data in my own codebase (LLFF data format), I found that I could not render NeRF on the normal-light images; the generated images look like the following. I see your code is based on RawNeRF, and when I run RawNeRF data in my codebase, everything is OK.

So is this a problem with the camera poses, or where did I go wrong? Thank you in advance!

[image]

About Code RAW

The code is missing a step at internal/dataset.py, line 625: if the chosen mode is RAW, self.image_paths is never set.

You should add the following after line 625:

colmap_files = sorted(utils.listdir(colmap_image_dir))
image_files = sorted(utils.listdir(image_dir))
colmap_to_image = dict(zip(colmap_files, image_files))
image_paths = [os.path.join(image_dir, colmap_to_image[f])
               for f in image_names]
self.image_paths = np.array(image_paths)
