
hustvl / 4DGaussians

[CVPR 2024] 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering

Home Page: https://guanjunwu.github.io/4dgs/

License: Other

Python 3.45% Shell 0.21% Jupyter Notebook 96.35%
3d computer-vision dynamic-scene gaussian-splatting graphics neural-network neural-rendering novel-view-synthesis radiance-field cvpr2024

4DGaussians's People

Contributors: catid, dli7319, guanjunwu, jsxzs, taoranyi, xinggangw


4DGaussians's Issues

Severe artifacts in N3DV coffee scene

When I run the code with the following commands:

export CUDA_VISIBLE_DEVICES=2
expname='coffee_martini'
port=6080
python train.py -s data/dynerf/${expname} --port ${port} --expname "dynerf/${expname}" --configs arguments/dynerf/default.py  &
wait
python render.py --model_path output/dynerf/${expname} --configs arguments/dynerf/default.py --skip_train --skip_test &
wait
python metrics.py --model_path output/dynerf/${expname}/  &
wait
echo "Done"

I found that the rendered video has a lot of artifacts, and the quality seems much lower than reported in the paper.

https://drive.google.com/file/d/14QJRuNAjHiPiHqyHM1aCiOfEXYWb2vP-/view?usp=sharing

About Rendering Quality

Hello, this is really nice work.
But when I used 4D Gaussians to render interp/aleks-teapot from the HyperNeRF dataset yesterday, I found that the rendering quality was poor. There was obvious clipping (model intersection) in the hands and other dynamic areas, and the motion was insufficiently modeled, which is inconsistent with the results in the paper. What causes this?

About new code

Hello! I have been following this project for a long time. I tested your old code and your new code, and there is a clear difference in the results. You seem to have changed the deformation network: floaters are significantly reduced, and the background is much cleaner.
But in some details the old model seems better, such as the hands in the image below (left: old code, right: new code):

[image]

The issue of rendering quality.

Thanks to the outstanding work by the authors.

During my experiments, I attempted various approaches for modeling the spatial changes in 3D-GS. Based on my experimental results, grid/plane-based methods appear more suitable for directly feeding a feature (128- or 256-dim) into an MLP as the encoding of time. Using such structures to learn deltas for position, rotation, and scaling seems to have a negative impact, since it introduces many more learnable parameters.

I'm not sure whether the authors have optimized the grid/plane structure for GS, and I look forward to your response.
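
For what it's worth, a minimal sketch of the first scheme described above (illustrative names and sizes, not code from this repo): a single learned feature per timestamp fed directly into the deformation MLP, as opposed to per-point grid/plane lookups that add many more learnable parameters.

import torch
import torch.nn as nn

class TimeFeatureDeform(nn.Module):
    # One learned global feature per timestamp is concatenated with the
    # point position and fed straight into the MLP.
    def __init__(self, feat_dim=128, n_timestamps=300):
        super().__init__()
        self.time_feats = nn.Embedding(n_timestamps, feat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 + 4 + 3))  # deltas for position, rotation, scaling

    def forward(self, xyz, t_idx):
        # xyz: (N, 3) Gaussian centers; t_idx: 0-dim LongTensor timestamp index
        f = self.time_feats(t_idx).expand(xyz.shape[0], -1)
        return self.mlp(torch.cat([xyz, f], dim=-1))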

Question about training at different timestamps

Thank you for your great contribution to dynamic gs!

You know that 3D-GS needs a convergence process.

What I couldn't figure out from your paper: does every S' at a different timestamp need a similar convergence process, or is it just computed from the previous S?

Forgive me if the question is silly. Thanks again!

Training using "cut_roasted_beef"

Hello! I followed your instructions to preprocess the "cut_roasted_beef" dataset for training. However, during training the test PSNR consistently stays around 16 to 17. Why is this happening? Additionally, in order to train on this data I uncommented this part in dataset_readers.py. Is this the correct approach? I hope to receive your assistance.
[image]

Make dataset

How can I make my own dynamic dataset, like the HyperNeRF data?

About the static training stage

Hi there,

Could you elaborate on how you perform the static training stage (the 'coarse' stage)? What kind of static scene do you choose to optimize in this stage?

I previously thought it would be the scene of the first frame, which you model and then deform in the fine stage. But as I read the paper I became confused, since you mention that the stage is especially effective for datasets like D-NeRF/DyNeRF, which are monocular with respect to individual frames during training, and there are no further details about this part.

It would be great if you could provide more related details and insights :)

Thanks!

Custom training data has poor quality on hand

I started from images of the enerf-outdoor dataset and used COLMAP to generate camera parameters and initial point clouds. The background quality in the result is very good, but once the characters move there is a noticeable loss of quality.
Look at the face and hands:
[image]

Evaluation failed

Hi, thanks for your great work on 4D Gaussians. I trained the lego model, but when I tried to evaluate it, it seemed to fail. The output is:
(Gaussians4D) root@autodl-container-7bda11a2fa-37532aa1:~/autodl-tmp/4DGaussians# python metrics.py --model_path "output/dnerf/lego/"

Scene: output/dnerf/lego/
Method: ours_20000
Unable to compute metrics for model output/dnerf/lego/

What is the representation of the end result in this project

After experimenting with the contents of the hook folder, I found that the output is an mp4. Can the project also produce something like the splat file from the 3DGS project, which can be imported into UE? My first thought on seeing this project was that it could replicate dynamic scenes: for example, import a sunset video and output a continuous, faithful sunset scene; imported into UE or similar software, adjusting the playback rate could lower the cost of building continuous scenes. Is such an effect achievable? If so, please tell me how. Thank you very much!

fatal: no submodule mapping found in .gitmodules for path 'submodules/depth-diff-gaussian-rasterization'

Thanks for the excellent work!

But I encountered an error when attempting to clone this repo recursively.

$ git clone https://github.com/hustvl/4DGaussians.git --recursive
Cloning into '4DGaussians'...
remote: Enumerating objects: 2245, done.
remote: Counting objects: 100% (64/64), done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 2245 (delta 21), reused 31 (delta 9), pack-reused 2181
Receiving objects: 100% (2245/2245), 52.10 MiB | 3.37 MiB/s, done.
Resolving deltas: 100% (1025/1025), done.
Checking connectivity... done.
fatal: no submodule mapping found in .gitmodules for path 'submodules/depth-diff-gaussian-rasterization'

YDJ's amateur one-line paper readings: Gaussian Splatting & DreamGaussian[Splatting] & Dynamic/4D Gaussian Splatting

I really am an amateur; my formal education ended with a junior-high diploma in a remote village in western China early last century, so take my rambling for what it is. I am not at all denying the novelty of this SOTA/newest paper; rather, I am using the current best algorithms to analyze how far this direction still is from the desired result. (Since the authors are from HUST + HW, the original was written in Chinese.)

Let me borrow this space to ask the experts a question: as I understand it, the time/dynamics in today's 3+1D stereo generation/reconstruction is really a kind of pseudo or weak dynamics.

After Gaussian Splatting came out, DreamGaussian introduced zero123's prior to achieve single-image reconstruction; this paper's 4DGaussians, and Dynamic3DGaussians, introduce k-planes-style approaches to achieve so-called dynamics.

My understanding is that today's dynamics, given multi-image input, is essentially a simple extension of static multi-view reconstruction to multiple poses of the same object; it is tolerance/support on the input side for multiple known poses of one object.

What we actually want, in my personal understanding, is 'true' dynamics on both the input and output sides:
1) Input dynamics: the same object in different forms (not just poses), with the ability to adapt to and capture 'motion': objects with extremely strong, finely specified mechanical constraints such as clocks; objects with strong constraints such as the human body; and objects without strongly representable constraints such as flowing water should all be supported. (Current demo datasets and papers feel mostly like the human body, i.e., medium constraint strength.)
2) Output dynamics: first, merely capturing the motion exhibited in the input, a movie-like output; I have not yet seen this as an implicit-representation output such as a static mesh plus motion-driving tensors and similar parameters. Second, truly capturing motion regularities/constraints, either from sufficiently rich input or from motion priors brought by some future dynamic version of Objaverse/zero123, reaching something like SMPL(-X): obtaining both a static mesh (or an equivalent implicit representation) and reasonable motion constraints, even semantically separated constraint representations (body parts/components; motion-control primitives such as stand-up, lift-leg(s), ...). SMPL-X, for example, has a static template, and different vectors can drive different motion types and details for different body parts.

Attached is an example of mine: pure AI, a full body driven mostly from a single image/face, with text-controlled motion, but a heavy multi-step pipeline reconstructing a real person at medium complexity (hair and clothing still being refined). (This kind of single-image/prior-based approach is still far from true dynamics.)

motion_video_audio.mp4

How to Create custom dataSets for training?

Hello, I have gone through all the issues and still have many doubts about how to build my own dataset. I have successfully compiled Nerfstudio, but it can only generate data in COLMAP format and cannot convert it to Blender format. Additionally, the README does not provide instructions on how to train a generic dataset. I would greatly appreciate it if you could provide a solution.

About opacity_final

Thanks for the excellent work.

I observe that scales_final, means3D_final, and rotations_final (in 4DGaussians/gaussian_renderer/__init__.py) are used for rendering, but "opacity" is used instead of "opacity_final", and "opacity_final" is never used anywhere.

I have no idea about this. Can you tell me why?

rendered_image, radii, depth = rasterizer(
    means3D = means3D_final,
    means2D = means2D,
    shs = shs,
    colors_precomp = colors_precomp,
    opacities = opacity,
    scales = scales_final,
    rotations = rotations_final,
    cov3D_precomp = cov3D_precomp)

(Line 117)
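
For reference, this is presumably the variant the questioner expects, with the deformed opacity swapped in (a guess at intent, not the authors' answer; whether opacity should be deformed at all is exactly what the issue asks):

rendered_image, radii, depth = rasterizer(
    means3D = means3D_final,
    means2D = means2D,
    shs = shs,
    colors_precomp = colors_precomp,
    opacities = opacity_final,  # deformed opacity instead of the canonical one
    scales = scales_final,
    rotations = rotations_final,
    cov3D_precomp = cov3D_precomp)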

subprocess-exited-with-error x python setup.py develop TypeError: expected string or bytes-like object

When running pip install -e submodules/depth-diff-gaussian-rasterization, I keep getting an error message that looks like this:

(Gaussians4D) C:\Users\Manu\4DGaussians>pip install -e submodules/depth-diff-gaussian-rasterization
Obtaining file:///C:/Users/Manu/4DGaussians/submodules/depth-diff-gaussian-rasterization
Preparing metadata (setup.py) ... done
Installing collected packages: diff-gaussian-rasterization
Running setup.py develop for diff-gaussian-rasterization
error: subprocess-exited-with-error

× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [60 lines of output]
    running develop
    running egg_info

...

writing manifest file 'diff_gaussian_rasterization.egg-info\SOURCES.txt'
running build_ext
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
C:\Users\Manu\anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\easy_install.py:147: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
EasyInstallDeprecationWarning,
C:\Users\Manu\anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
C:\Users\Manu\anaconda3\envs\Gaussians4D\lib\site-packages\torch\utils\cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
C:\Users\Manu\anaconda3\envs\Gaussians4D\lib\site-packages\torch\utils\cpp_extension.py:358: UserWarning: Error checking compiler version for cl: [WinError 2] Das System kann die angegebene Datei nicht finden
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
Traceback (most recent call last):
File "", line 36, in
File "", line 34, in

...

File "C:\Users\Manu\anaconda3\envs\Gaussians4D\lib\site-packages\pkg_resources_vendor\packaging\version.py", line 264, in init
match = self._regex.search(version)
TypeError: expected string or bytes-like object
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.

error: subprocess-exited-with-error

I tried several things like

  • installing CUDA 11.7 instead of 11.8
  • creating a CUDA_HOME system variable pointing to the path where CUDA is installed
  • manually installing ninja via pip install ninja
  • checking that my versions of PyTorch and CUDA are compatible
  • running setup.py directly via cd submodules/depth-diff-gaussian-rasterization and python setup.py develop
  • updating setuptools via pip install --upgrade setuptools

and some other things but nothing worked.
Some actions changed the number of lines of output: at first it was 58 lines, then 56, then 78, now 60.

I deleted the 4DGaussians folder and started from scratch, but nothing changed.
Earlier today I trained a normal 3D-GS scene and everything worked just fine.

I'm working with

  • Win 10
  • RTX 4090
  • conda 23.7.4
  • Cuda compilation tools, release 11.8, V11.8.89
    Build cuda_11.8.r11.8/compiler.31833905_0
  • torch Version: 1.13.1

Please let me know if there is any way to fix this, I don't know what else I could try!
Thanks to the devs for this amazing project, hope it's possible to fix this issue and start working with it!

Process to create a training dataset for real dynamic scenes (HyperNeRF)

I checked out the sample preprocessed dataset provided; I trained, rendered, and evaluated it, and obtained the output video.
Now I want to create my own dataset from scratch. Could someone please explain the step-by-step process for generating such a dataset?

Please provide the information in detail, so that a newcomer can understand and implement it easily.

How to output the dense point cloud or high quality mesh

Hi, I want to export the dense point cloud or high-quality mesh.

I tried to export mesh, but the accuracy is very poor.

For the point cloud, I started directly from the output point cloud and then converted it via SH2RGB to a normal point cloud. However, I found that the point cloud derived this way is far inferior to traditional 3D reconstruction methods such as COLMAP dense reconstruction. (1) I increased the number of Gaussians during training by changing densify_from_iter, but the resulting point cloud quality was still poor (even with far more points than the dense reconstruction produces); I think simply increasing the number of Gaussians is not a good approach. (2) I replaced the initial random points3D.ply with the point cloud generated by my dense reconstruction. I found that, because of the nature of Gaussians, the number of meaningful points kept decreasing while many noise points were added, so the point cloud obtained after training is still not as good as the dense reconstruction.

I would like to know if there is any idea to export a high-quality point cloud or mesh. This may not be the focus of this paper, but it could also be important.
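
For context, the SH2RGB step mentioned above involves only the degree-0 SH band; a minimal sketch (C0 is the standard zeroth-order SH constant used in Gaussian-splatting codebases; the array name is illustrative):

import numpy as np

C0 = 0.28209479177387814  # zeroth-order spherical harmonic constant

def sh_dc_to_rgb(sh_dc):
    # sh_dc: (N, 3) DC spherical-harmonic coefficients, one row per Gaussian
    return np.clip(sh_dc * C0 + 0.5, 0.0, 1.0)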


Data value requirements of the custom dataset

Thanks for your great work.

I'm trying your method on a customized dataset obtained from stereoscopy rather than from COLMAP or synthetic software. When preparing it for your code, a problem occurs: HexPlane cannot interpolate a value and returns a zero vector.

So I'm wondering whether there are specific requirements on the input data range: should the poses be centered, should the initial point cloud be normalized, or something else? Thanks.
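
One plausible cause, offered as an assumption rather than a confirmed answer: grid sampling typically returns zeros for coordinates that fall outside the plane's bounding box, so the scene may need to be rescaled to fit inside the configured bounds first. A rough sketch:

import numpy as np

def normalize_to_bounds(points, bounds=1.6):
    # Center the point cloud and rescale it so every coordinate falls in
    # [-bounds, bounds]; 1.6 matches the `bounds` value in the configs
    # quoted elsewhere on this page.
    center = points.mean(axis=0)
    shifted = points - center
    return shifted * (bounds / np.abs(shifted).max())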

Error when running colmap.sh on DyNeRF data

The problem happens after image_undistorter, while running patch_match_stereo:

==============================================================================
Processing view 1 / 21 for r_012.png

Reading inputs...
F1212 15:39:45.322880 972573 image.cc:58] Check failed: width_ == bitmap_.Width() (1352 vs. 2704)
*** Check failure stack trace: ***
@ 0x7f52e2ddb1c3 google::LogMessage::Fail()
@ 0x7f52e2de025b google::LogMessage::SendToLog()
@ 0x7f52e2ddaebf google::LogMessage::Flush()
@ 0x7f52e2ddb6ef google::LogMessageFatal::~LogMessageFatal()
@ 0x562ac73a7fec colmap::mvs::Image::SetBitmap()
@ 0x562ac70f9ff8 colmap::mvs::PatchMatchController::ProcessProblem()
@ 0x562ac70fcec0 _ZNSt17_Function_handlerIFSt10unique_ptrINSt13__future_base12_Result_baseENS2_8_DeleterEEvENS1_12_Task_setterIS0_INS1_7_ResultIvEES3_EZNS1_11_Task_stateISt5_BindIFMN6colmap3mvs20PatchMatchControllerEFvRKNSD_17PatchMatchOptionsEmEPSE_SF_mEESaIiEFvvEE6_M_runEvEUlvE_vEEE9_M_invokeERKSt9_Any_data
@ 0x562ac6fc2dbd std::__future_base::_State_baseV2::_M_do_set()
@ 0x7f52e2dbc47f __pthread_once_slow
@ 0x562ac70fd681 _ZNSt17_Function_handlerIFvvEZN6colmap10ThreadPool7AddTaskIMNS1_3mvs20PatchMatchControllerEFvRKNS4_17PatchMatchOptionsEmEJPS5_RS6_RmEEESt6futureINSt9result_ofIFT_DpT0_EE4typeEEOSG_DpOSH_EUlvE_E9_M_invokeERKSt9_Any_data
@ 0x562ac70f2f8a colmap::ThreadPool::WorkerFunc()
@ 0x7f52cd3edaa3 execute_native_thread_routine
@ 0x7f52e2db3609 start_thread
@ 0x7f52cd0e3163 clone
colmap.sh: line 22: 972568 Aborted

To the HUST experts: how about a "Dream Dynamic Gauss"?

It feels like the data for this does not exist yet.

Shape:
zero123(XL) can only carry over a weak shape prior; multi-view is still far off.

Dynamics:
There is still no larger dataset, and no representation, for this: all kinds of objects with all kinds of poses/actions/motions.
Frame-like representations do not feel adequate; either you need a model at least a hundred times the size of ChatGPT to represent it implicitly, or you need explicit template + driving-vector control.

Add disclaimer that clarifies scope

Dear fellows,
If I am right about the below, it would have saved me hours if a Mac non-compatibility disclaimer were shown near the installation steps, so I propose adding one:

How to output the per-frame point cloud or mesh

Thanks for the great work.

I now want to export the contents of each frame for editing. The output of the SIGGRAPH 2023 3D Gaussian Splatting is a .ply file, from which I can easily get the corresponding point cloud and mesh. For 4D Gaussians, I want to know how to get the deformed position of each 3D Gaussian in every frame, so that I can output a per-frame point cloud or mesh.

Any possible advice or practice would be very helpful to me. Thank you sincerely.
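
A rough sketch of one way to do this, assuming a deformation-field call like deform(xyz, t); the helper names are hypothetical and would need to be adapted to the repo's actual interface:

import numpy as np
import torch
from plyfile import PlyData, PlyElement

def save_frame_pointcloud(gaussians, deform, t, path):
    # Deform the canonical Gaussian centers to time t and write them out
    # as a .ply point cloud. `gaussians.get_xyz` and `deform` are assumed
    # interfaces, not the repo's exact API.
    with torch.no_grad():
        xyz = gaussians.get_xyz                 # (N, 3) canonical centers
        times = torch.full_like(xyz[:, :1], t)  # one timestamp per point
        xyz_t = deform(xyz, times)              # centers deformed to time t
    pts = xyz_t.cpu().numpy()
    verts = np.array([tuple(p) for p in pts],
                     dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])
    PlyData([PlyElement.describe(verts, 'vertex')]).write(path)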

Failure on custom data either dynamic or static

Hi there,

Thanks for the great work!!

I've been trying to use my own custom data generated from Blender, using animation from Mixamo and sampling camera viewpoints. The input image frames look like this:
[image: frame_0001]
I applied the arguments of the D-NeRF dataset directly and tried my best to match its setup: 800x800 resolution for each frame, and sparse camera views sampled for the train, test, and validation sets.

ModelHiddenParams = dict(
    kplanes_config = {
     'grid_dimensions': 2,
     'input_coordinate_dim': 4,
     'output_coordinate_dim': 32,
     'resolution': [64, 64, 64, 100] #[64, 64, 64, 50]
    }
)
OptimizationParams = dict(
    coarse_iterations = 3000,
    deformation_lr_init = 0.00016,
    deformation_lr_final = 0.0000016,
    deformation_lr_delay_mult = 0.01,
    grid_lr_init = 0.0016,
    grid_lr_final = 0.000016,
    iterations = 20000,
    pruning_interval = 8000,
    percent_dense = 0.01,
    # opacity_reset_interval=30000
)
ModelHiddenParams = dict(
    multires = [1, 2, 4, 8 ],
    defor_depth = 0,
    net_width = 64,
    plane_tv_weight = 0,
    time_smoothness_weight = 0,
    l1_time_planes =  0,
    weight_decay_iteration=0,
    bounds=1.6
)

But during training, the PSNR and L1 loss stay the same while densification goes on, at both the coarse and the fine stage.

[images]

And the rendered video is not working well:

video_rgb_all.mp4

I also tried fitting only the first frame, and it still shows the same issue. The rendered video:

video_rgb.mp4

Do you know what might cause the issue?

Any chance to view it in a real-time viewer?

Hi, thank you for your nice work! I've tried the DyNeRF dataset and the rendered video looks nice. But is there a way to view the dynamic scene interactively, using viewers like SIBR or possibly any other viewer?

Custom trained data is coming out as a mess

First of all, great work. I am very impressed by the results you have been able to achieve.

However, I have not been able to get my own datasets working well, and I thought maybe you'd be able to point me in the right direction or tell me about any limitations I should be aware of.

To put it short: using vanilla Gaussian Splatting I am able to get a solid non-dynamic scene out of this data, so I'm trying to figure out what gives.

I've modeled my input data to match what the DyNeRF dataset uses. To achieve this I took my 9-16 synced camera video files and dumped out each frame.

Then, to calibrate, I took the first frame from each camera and put it into its own calibration/images folder. From there I followed LLFF's imgs2poses.py and got a good calibration for both datasets I've tried (I verified that the number of cameras matched and the positions looked accurate using the COLMAP GUI). I let this script output the .npy file and put it in the root of my capture data's folder so it matched the structure the DyNeRF datasets use.

I then created a .py config file matching the default DyNeRF .py file (I wonder if there is anything special that needs to be done here?).

From there I was able to run it the same way as your sample DyNeRF dataset, but the resulting video was basically just noise for the dataset with 16 cameras circling a target, and quite noisy for the other dataset I created with 9 cameras in a more linear/semi-curved arrangement.

I've noticed you are quite active on this project, so I thought I'd reach out and see if you have any idea what I may be doing wrong.

Should I be trying to get my input data to match one of the other supported dataset formats you've tested with? If so, could you provide detailed instructions on how to take several synced camera videos (or just the dumped image sequences from each of them) and get better results with this awesome codebase?

Any help would be much appreciated. I look forward to hearing back!

Tiny Bug: function parameter names are not aligned

In train.py:

if stage == "fine" and hyper.time_smoothness_weight != 0:
            # tv_loss = 0
            tv_loss = gaussians.compute_regulation(hyper.time_smoothness_weight, hyper.plane_tv_weight, hyper.l1_time_planes)
            loss += tv_loss

However, in gaussian_model.py:

    def compute_regulation(self, time_smoothness_weight, l1_time_planes_weight, plane_tv_weight):
        return plane_tv_weight * self._plane_regulation() + time_smoothness_weight * self._time_regulation() + l1_time_planes_weight * self._l1_regulation()

It makes little difference in practice, though, haha (the two weights just get applied to each other's regularizer).
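
A minimal fix, assuming the call site in train.py is the side to change, is to pass keyword arguments so the order cannot drift:

tv_loss = gaussians.compute_regulation(
    time_smoothness_weight=hyper.time_smoothness_weight,
    l1_time_planes_weight=hyper.l1_time_planes,
    plane_tv_weight=hyper.plane_tv_weight)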

colmap.sh for hypernerf dataset

Thank you for sharing your nice work! I tried
zsh colmap.sh ../data/hypernerf/vrig/vrig-chicken hypernerf
to generate SfM dense points. In this part, however, isn't it necessary to filter out the dynamic parts using a binary mask in COLMAP? There is an option for this, --ImageReader.mask_path, in colmap feature_extractor. Running the above command gives an abort error, and no points are triangulated:
=> Image sees 0 / 0 points
=> Triangulated 0 points

Missing submodule

Hi, great work! I'm having trouble installing the submodule even though I cloned with --recursive. Please help!

installation problem on "pip install -e submodules/depth-diff-gaussian-rasterization"

I have already installed cudatoolkit=11.6 and pytorch=1.13.1.
Can you please help me make it work?

(Gaussians4D) Y:\AI\4DGaussians>pip install -e submodules/depth-diff-gaussian-rasterization
Obtaining file:///Y:/AI/4DGaussians/submodules/depth-diff-gaussian-rasterization
Preparing metadata (setup.py) ... done
Installing collected packages: diff-gaussian-rasterization
Running setup.py develop for diff-gaussian-rasterization
error: subprocess-exited-with-error

× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [58 lines of output]
    running develop
    running egg_info
    writing diff_gaussian_rasterization.egg-info\PKG-INFO
    writing dependency_links to diff_gaussian_rasterization.egg-info\dependency_links.txt
    writing top-level names to diff_gaussian_rasterization.egg-info\top_level.txt
    reading manifest file 'diff_gaussian_rasterization.egg-info\SOURCES.txt'
    adding license file 'LICENSE.md'
    writing manifest file 'diff_gaussian_rasterization.egg-info\SOURCES.txt'
    running build_ext
    No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7'
    Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\easy_install.py:147: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
      EasyInstallDeprecationWarning,
    Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
      setuptools.SetuptoolsDeprecationWarning,
    Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\torch\utils\cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
      warnings.warn(msg.format('we could not find ninja.'))
    Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\torch\utils\cpp_extension.py:358: UserWarning: Error checking compiler version for cl: [WinError 2] El sistema no puede encontrar el archivo especificado
      warnings.warn(f'Error checking compiler version for {compiler}: {error}')
    Traceback (most recent call last):
      File "<string>", line 36, in <module>
      File "<pip-setuptools-caller>", line 34, in <module>
      File "Y:\AI\4DGaussians\submodules\depth-diff-gaussian-rasterization\setup.py", line 32, in <module>
        'build_ext': BuildExtension
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
        return run_commands(dist)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
        dist.run_commands()
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
        self.run_command(cmd)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\dist.py", line 1208, in run_command
        super().run_command(command)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
        cmd_obj.run()
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\develop.py", line 34, in run
        self.install_for_development()
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\develop.py", line 114, in install_for_development
        self.run_command('build_ext')
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
        self.distribution.run_command(command)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\dist.py", line 1208, in run_command
        super().run_command(command)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
        cmd_obj.run()
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
        _build_ext.run(self)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
        self.build_extensions()
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\torch\utils\cpp_extension.py", line 499, in build_extensions
        _check_cuda_version(compiler_name, compiler_version)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\torch\utils\cpp_extension.py", line 382, in _check_cuda_version
        torch_cuda_version = packaging.version.parse(torch.version.cuda)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\pkg_resources\_vendor\packaging\version.py", line 49, in parse
        return Version(version)
      File "Y:\AI\Anaconda3\envs\Gaussians4D\lib\site-packages\pkg_resources\_vendor\packaging\version.py", line 264, in __init__
        match = self._regex.search(version)
    TypeError: expected string or bytes-like object
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.

error: subprocess-exited-with-error


About the two FPS numbers

I noticed that two FPS values (test and video) are printed during rendering, and they differ significantly. Would you mind telling me what causes the difference?

Rendering quality of the HyperNeRF split-cookie scene

Hi, thanks for this great research paper.
While I was playing with the code, I found multiple cases where the results did not match those shown in the paper and on the project page.
The project page shows a good result on the HyperNeRF split-cookie dataset; below is the rendering quality after training with the script files in the repo.

[image]

[image]

I was wondering if there are any more details on this specific scene, as the results are not reproducible.

Quality much lower than on project page

Hi,
thank you very much for your code and your research. I ran your code on the chickchicken dataset.
I used the following command for training (similar to what you provide in the README):

python train.py -s data/hypernerf/interp/chickchicken --port 6017 --expname "hypternerf/interp/chickenchicken" --configs arguments/hypernerf/default.py

It took me approx. 1 hour to train on the dataset. I'm not sure why, but the quality appears much worse than what you present on the project page. I have also trained on the synthetic bouncing balls dataset, which likewise looks worse than what you show. Do I need to change something with respect to the training parameters?

Here is the link to my result: https://youtube.com/shorts/hipsFRtWAz8?si=iF4OCXOVt2AZN0et

By the way, is there a way to view the dynamic scene interactively? For the bouncingballs scene I can run the SIBR viewer; however, I only get a static scene. For the chickchicken scene it crashes at SIBR viewer startup.

Thank you,
janusch

Error when Training Hypernerf Dataset

I have encountered an issue while attempting to train on the HyperNeRF dataset. Here are the details:

I have successfully executed the following command:

python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py

However, when I tried to train the hypernerf dataset with the following command:

python train.py -s data/hypernerf/vrig/broom2 --port 6017 --expname "hypernerf/broom2" --config arguments/hypernerf/broom2.py

I encountered the following error:

Traceback (most recent call last):
  File "train.py", line 410, in <module>
    training(lp.extract(args), hp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, args.expname)
  File "train.py", line 284, in training
    scene = Scene(dataset, gaussians, load_coarse=None)
  File "/data/data_2t/4DGaussians/scene/__init__.py", line 56, in __init__
    scene_info = sceneLoadTypeCallbacks["nerfies"](args.source_path, False, args.eval)
  File "/data/data_2t/4DGaussians/scene/dataset_readers.py", line 386, in readHyperDataInfos
    pcd = fetchPly(ply_path)
  File "/data/data_2t/4DGaussians/scene/dataset_readers.py", line 125, in fetchPly
    plydata = PlyData.read(path)
  File "/home/kevin/.local/lib/python3.8/site-packages/plyfile.py", line 158, in read
    (must_close, stream) = _open_stream(stream, 'read')
  File "/home/kevin/.local/lib/python3.8/site-packages/plyfile.py", line 1345, in _open_stream
    return (True, open(stream, read_or_write[0] + 'b'))
FileNotFoundError: [Errno 2] No such file or directory: '/data/data_2t/4DGaussians/data/hypernerf/vrig/broom2/points3D_downsample.ply'

I would appreciate it if you could provide guidance on any missing steps or potential solutions for this issue. Thank you.
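
The missing file suggests a point-cloud downsampling step was skipped during preprocessing. If the repo does not ship its own script for this, something like the following could generate the file (a sketch assuming open3d and an existing dense points3D.ply; paths and voxel size are illustrative):

import open3d as o3d

# Read the dense initialization point cloud and write a voxel-downsampled
# copy under the name the data loader expects.
pcd = o3d.io.read_point_cloud("data/hypernerf/vrig/broom2/points3D.ply")
down = pcd.voxel_down_sample(voxel_size=0.01)
o3d.io.write_point_cloud("data/hypernerf/vrig/broom2/points3D_downsample.ply", down)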
