Neural implicit surface modelling. Fast, efficient implementation of NeuS with Instant-NGP's hash grid encoding, and CUDA-accelerated components.

License: MIT License


neusacc's Introduction

NeuSacc

An accelerated version of the SDF-based NeuS method for surface modelling, which accurately renders novel views and reconstructs the surface of solid 3D objects from a sparse set of views.

Features

This implementation is mainly based on a few recent advances in neural rendering:

  • The original NeuS implementation.

  • The Hash Grid encoding from Instant-NGP, which has been shown to vastly reduce fitting times.

  • The CUDA-accelerated, fully fused MLPs and hash grid from tinycudann (see the sketch just after this list).

  • The CUDA-accelerated rendering and occupancy grid from Nerfacc.
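
As an illustration of how these pieces fit together, here is a minimal sketch of a hash-grid-encoded, fully fused SDF network built with tinycudann. The layer sizes and encoding parameters below are assumptions chosen for the example, not the exact values used in this repository.

import torch
import tinycudann as tcnn

# Hash-grid encoding and fully fused MLP evaluated together in CUDA.
# All sizes below are illustrative, not this repository's settings.
sdf_net = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,
    n_output_dims=16,  # e.g. 1 SDF value plus a 15-dim feature for the color network
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
)

x = torch.rand(1024, 3, device="cuda")  # hash-grid inputs are expected in [0, 1]^3
out = sdf_net(x)                        # shape (1024, 16), computed on the GPU

Nerfacc then adds an occupancy grid on top of such a field so that empty space is skipped during ray marching.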

Other interesting repositories and implementations are:

  • instant-nsr-pl: another accelerated implementation of NeuS with Pytorch Lightning. Very, very neat.

  • Instant-NSR: NeuS implementation using multiresolution hash encoding.

Running

The config in confs/wmask_DTUscan.conf is ready for scene 106 of the DTU scan dataset (which you can download here). You can reconstruct this scene in about 30 minutes, while the original NeuS implementation takes around 2 hours.

To fit a NeuS model on this scene, run

python exp_runner.py --mode train --conf ./confs/wmask_DTUscan.conf

You may need to slightly adjust the learning rate, the weight decay, and the size of the hash grid and MLPs, depending on the amount of detail in each particular scene. In general, simpler, less detailed scenes will train in even less time, while objects with more high-frequency textures may take a bit longer to train.
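
The exact option names live in the .conf files; the snippet below is only a sketch of the kinds of knobs involved, using hypothetical names and plausible starting values rather than this repository's defaults.

import torch

# Hypothetical tuning knobs; the real option names are defined in the .conf files.
tuning = {
    "learning_rate": 1e-2,    # hash-grid models usually tolerate larger rates than vanilla NeuS
    "weight_decay": 1e-6,     # raise slightly if the reconstructed surface looks noisy
    "log2_hashmap_size": 19,  # bigger hash table -> more high-frequency detail, more memory
    "n_neurons": 64,          # width of the fused MLPs
    "n_hidden_layers": 2,     # depth of the fused MLPs
}

# Stand-in network; in practice this would be the SDF and color networks.
net = torch.nn.Linear(3, 64)
optimizer = torch.optim.Adam(
    net.parameters(),
    lr=tuning["learning_rate"],
    weight_decay=tuning["weight_decay"],
)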

Environments and Requirements

Clone this repository

git clone git@github.com:alvaro-budria/NeuSacc.git

then either

conda env create -f environment.yaml

or

pip install -r requirements.txt

A GPU with a compute capability of 7.5 (e.g. an RTX 2080 Ti) or higher is needed to run the tinycudann MLPs and hash grid encoding. To maximally exploit the efficiency of the hash grid encoding, an A10 or an RTX 3090 is recommended, as they have larger cache sizes.
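
A quick way to check the GPU before installing (a minimal sketch using PyTorch's device query):

import torch

# Fully fused tinycudann kernels need compute capability 7.5 or higher.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    if (major, minor) < (7, 5):
        print(f"Compute capability {major}.{minor} is below 7.5; "
              "tinycudann's fully fused MLPs will not run on this GPU.")
else:
    print("No CUDA device found.")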

Modelling a scene with custom COLMAP data

Please refer to this tutorial in the original NeuS repository for using custom data with COLMAP-generated camera poses.

TODO

  • Allow for multi-GPU training.
  • Include an additional component for modelling the background, as in Mip-NeRF360 or NeRF++.
  • Thoroughly benchmark this implementation of NeuS in terms of PSNR, Chamfer distance, and training time, comparing it with the original one.
  • Explore the effect that the number of frequency bands and the size of the hash grid and MLPs have on the resulting reconstruction.

neusacc's Issues

confs/wandb_config.yaml is empty?

Thanks for your previous reply. I find that the config file 'wandb_config.yaml' is empty, but the following code in exp_runner.py (lines 133-136) still uses it:

config_dict = {"yaml": "confs/wandb_config.yaml"}
self.run = wandb.init(
    project="NeuS-Enhanced", config=config_dict, save_code=True, dir=self.base_exp_dir,
)

I think this will cause an error, but I am not sure. Looking forward to your review. :)

RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm`

Thanks for your reply. When I run this code, there is another error. The error is the following:


File "/home/wl/anaconda3/envs/NeuSacc/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)


My local GPU machine:
My GPU is an NVIDIA RTX 3080.

My nvcc version is the following:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Thu_Feb_10_18:23:41_PST_2022
Cuda compilation tools, release 11.6, V11.6.112
Build cuda_11.6.r11.6/compiler.30978841_0

Looking forward to your reply. Thanks.

Result not good on custom data.

Hi, did you test the code on custom data processed following the NeuS tutorial? I cannot get a correct mesh using this project. Thanks very much.

Faster training

Your work is great. Can you tell me how to set the parameters to achieve significant acceleration? In my own tests, the speed is similar to the original NeuS.

A quick question on input range of tcnn.Encoding

Hi, thanks for this awesome implementation! I have a small question on the range of inputs to the tcnn encoding and network module in the sdf network.

I think this implementation follows the convention in NeuS where the scene of interest is normalized within the bounding volume [-1, 1]^3. However, tiny-cuda-nn's encodings all expect inputs within [0, 1]^n by convention, don't they? And I haven't seen any rescaling operation (like the one applied to the view-direction vectors in the color network) before the input coordinates are fed to the SDF network.

Do we need to rescale the input range from [-1, 1] to [0, 1] before feeding into the tcnn encoding module?

Thanks in advance!
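
For reference, the rescaling discussed in this question is a simple affine map; a minimal sketch, assuming the hash-grid encoding does expect inputs in [0, 1]^3:

import torch

x = torch.rand(1024, 3) * 2.0 - 1.0  # points inside the NeuS bounding volume [-1, 1]^3
x01 = (x + 1.0) * 0.5                # rescaled to [0, 1]^3 before the hash-grid encoding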

Use of alpha_fn in neus.py

Hello! Great work!
I'm a beginner in 3D reconstruction. I want to ask whether using the predicted volume density (alpha) in neus.py to reduce the number of sampling points actually improves sample efficiency for thin objects and thus accuracy, or whether it only saves computation.
Thanks!
