pureclipnerf's People

Contributors

hanhung


pureclipnerf's Issues

Getting a build error: subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

I tried to run the code on Windows but got this error when building "render_utils_cuda". Here is the full log message:

Using C:\Users\qwswe\AppData\Local\torch_extensions\torch_extensions\Cache\py38_cu113 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file C:\Users\qwswe\AppData\Local\torch_extensions\torch_extensions\Cache\py38_cu113\adam_upd_cuda\build.ninja...
Building extension module adam_upd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module adam_upd_cuda...
Using C:\Users\qwswe\AppData\Local\torch_extensions\torch_extensions\Cache\py38_cu113 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file C:\Users\qwswe\AppData\Local\torch_extensions\torch_extensions\Cache\py38_cu113\render_utils_cuda\build.ninja...
Building extension module render_utils_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\nvcc --generate-dependencies-with-compile --dependency-output render_utils_kernel.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=render_utils_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\TH -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include" -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -c D:\AI\PureCLIPNeRF\lib\cuda\render_utils_kernel.cu -o render_utils_kernel.cuda.o
FAILED: render_utils_kernel.cuda.o
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\nvcc --generate-dependencies-with-compile --dependency-output render_utils_kernel.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=render_utils_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\TH -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include" -IC:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -c D:\AI\PureCLIPNeRF\lib\cuda\render_utils_kernel.cu -o render_utils_kernel.cuda.o
C:/Users/qwswe/Anaconda3/envs/PureCLIPNeRF/lib/site-packages/torch/include\c10/macros/Macros.h(142): warning C4067: unexpected tokens following preprocessor directive - expected a newline
C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:/Users/qwswe/Anaconda3/envs/PureCLIPNeRF/lib/site-packages/torch/include\c10/macros/Macros.h(142): warning C4067: unexpected tokens following preprocessor directive - expected a newline
C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\Include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(47): error: calling a host function("__ceilf") from a global function("infer_n_samples_cuda_kernel ") is not allowed

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(47): error: identifier "__ceilf" is undefined in device code

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(312): error: calling a host function("__roundf") from a global function("maskcache_lookup_cuda_kernel ") is not allowed

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(312): error: identifier "__roundf" is undefined in device code

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(313): error: calling a host function("__roundf") from a global function("maskcache_lookup_cuda_kernel ") is not allowed

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(313): error: identifier "__roundf" is undefined in device code

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(314): error: calling a host function("__roundf") from a global function("maskcache_lookup_cuda_kernel ") is not allowed

D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu(314): error: identifier "__roundf" is undefined in device code

8 errors detected in the compilation of "D:/AI/PureCLIPNeRF/lib/cuda/render_utils_kernel.cu".
render_utils_kernel.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\utils\cpp_extension.py", line 1808, in _run_ninja_build
    subprocess.run(
  File "C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run.py", line 23, in <module>
    import train_exp, train_imp
  File "D:\AI\PureCLIPNeRF\train_exp.py", line 23, in <module>
    from lib import utils, dvgo_exp
  File "D:\AI\PureCLIPNeRF\lib\dvgo_exp.py", line 18, in <module>
    render_utils_cuda = load(
  File "C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\utils\cpp_extension.py", line 1202, in load
    return _jit_compile(
  File "C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\utils\cpp_extension.py", line 1425, in _jit_compile
    _write_ninja_file_and_build_library(
  File "C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\utils\cpp_extension.py", line 1537, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "C:\Users\qwswe\Anaconda3\envs\PureCLIPNeRF\lib\site-packages\torch\utils\cpp_extension.py", line 1824, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'render_utils_cuda'

Is this a PyTorch compatibility issue? I'm using PyTorch 1.11.

Getting .glb/.obj from training output

Hi, I appreciate you all open-sourcing this!
After running the sample training, I noticed that the outputs are rendered images. Is there a way to export the object as a mesh file like .glb/.obj?
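
Concretely, I was imagining something along the lines of this marching-cubes sketch (the grid name, file paths, and threshold below are placeholders I made up; I don't know what the repository actually exposes):

import numpy as np
import trimesh
from skimage import measure

# Load a saved voxel density grid of shape (X, Y, Z); "density_grid.npy" is a hypothetical file.
density_grid = np.load("density_grid.npy")
# Extract an isosurface at an assumed density threshold of 5.0.
verts, faces, normals, _ = measure.marching_cubes(density_grid, level=5.0)
# Wrap the surface in a trimesh object and write it out as .obj (or .glb).
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("scene.obj")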

Thanks again

Getting torch._C._LinAlgError: cusolver error when running the explicit setting.

Hi, thanks for your fantastic work! When I tried running the explicit configuration with python run.py --config configs/low/exp_vit16.py --prompt "steampunk city; trending on artstation.", I got the following error:

Traceback (most recent call last):
  File "run.py", line 169, in <module>
    train(args, cfg, data_dict, jax_key, writer)
  File "/group/30042/jialexu/projects/PureCLIPNeRF/train_exp.py", line 585, in train
    scene_rep_reconstruction(
  File "/group/30042/jialexu/projects/PureCLIPNeRF/train_exp.py", line 410, in scene_rep_reconstruction
    rgb = trans_strategy(rgb)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 805, in forward
    return F.perspective(img, startpoints, endpoints, self.interpolation, fill)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 676, in perspective
    coeffs = _get_perspective_coeffs(startpoints, endpoints)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 636, in _get_perspective_coeffs
    res = torch.linalg.lstsq(a_matrix, b_matrix, driver="gels").solution
torch._C._LinAlgError: cusolver error: CUSOLVER_STATUS_EXECUTION_FAILED, when calling `cusolverDnXgeqrf( handle, params, m, n, CUDA_R_32F, reinterpret_cast<void*>(A), lda, CUDA_R_32F, reinterpret_cast<void*>(tau), CUDA_R_32F, reinterpret_cast<void*>(bufferOnDevice), workspaceInBytesOnDevice, reinterpret_cast<void*>(bufferOnHost), workspaceInBytesOnHost, info)`. This error may appear if the input matrix contains NaN.

It seems that the RandomPerspective transformation causes some bugs. I'm running on an A100 GPU with PyTorch 1.11.0 and CUDA 11.3, and the environment was set up following your instructions. Do you have any suggestions for solving this problem? Disabling persp_aug in the config file works fine, but I'm wondering how much that would affect the final results. Thanks a lot!
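
For now I'm working around it with a small guard like this (my own sketch, not code from the repo), which skips the augmentation whenever the rendered batch already contains non-finite values, since the cusolver message hints that the lstsq input may contain NaNs:

import torch

def safe_trans(rgb: torch.Tensor, trans_strategy) -> torch.Tensor:
    # Apply the RandomPerspective-based strategy only when the render is finite.
    if torch.isfinite(rgb).all():
        return trans_strategy(rgb)
    # Otherwise fall back to the un-augmented render for this iteration.
    return rgb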

Getting results different from the paper

Hi, I'm playing with the code and noticed that some results differ from those in the original paper. For example:

"A bag full of trash sitting on a old park bench" (paper/exp_vit16.py, 15000 steps):
[rendered image]

"A bag full of trash sitting on a old park bench" (paper/imp_vit16.py, 15000 steps):
[rendered image]

while in the paper, the results are (Figure 6):
[paper figure]

"American muscle car palms and moon synthwave." (paper/exp_vit16.py, 15000 steps):
[rendered image]

"American muscle car palms and moon synthwave." (paper/exp_imp16.py, 15000 steps):
[rendered image]

while in the paper, the result is (Figure 1):
[paper figure]

I did not change the config files or the random seed (left at the default 777). Do you know why this happens? How can I reproduce the exact results from the paper?
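
Or do I need extra seeding on my side beyond the config's seed? The determinism knobs I would guess matter (my own sketch, not the repository's code) are:

import random
import numpy as np
import torch

def seed_everything(seed: int = 777) -> None:
    # Seed every RNG that could influence training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade a bit of speed for reproducible cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False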

Augmentations not in [0..1]?

Hi,

thanks for releasing this great project!

After applying the 3 augmentations, you then transform the image into the range suitable for CLIP encoding:

rgb_norm = norm_transform(rgb)

However, rgb is not in the [0, 1] range before this transformation is applied; instead I get min/max values like these:

Min: -0.2289 Max: 1.0150
Min: -0.1300 Max: 2.3532

Is it intentional not to clamp or normalize rgb back into the [0, 1] range before applying the CLIP transformation?
I got worse-looking results when I manually normalized to [0, 1] after the augmentations.
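
For concreteness, the clamped variant I have in mind (my own change, not what the repository does) would be:

# Force the augmented render back into [0, 1] before the CLIP normalization.
rgb_clamped = rgb.clamp(0.0, 1.0)
rgb_norm = norm_transform(rgb_clamped)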

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    ๐Ÿ–– Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. ๐Ÿ“Š๐Ÿ“ˆ๐ŸŽ‰

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google โค๏ธ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.