hbb1 / 2d-gaussian-splatting

1.7K stars · 40 watchers · 87 forks · 9.36 MB

[SIGGRAPH'24] 2D Gaussian Splatting for Geometrically Accurate Radiance Fields

Home Page: https://surfsplatting.github.io

License: Other

Python 100.00%
novel-view-synthesis surface-reconstruction gaussian-splatting

2d-gaussian-splatting's People

Contributors

hbb1, hwanhuh, rongliu-leo, yubel426


2d-gaussian-splatting's Issues

DTU camera Poses

Thanks for the released code!
I notice that you generate the SfM point cloud with COLMAP given the GT camera poses (cameras.npz).
Could you share the script for this step, as I want to test more DTU scenes?
Thanks a lot!
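For reference, COLMAP's generic recipe for triangulating with known poses (an editorial sketch of the standard workflow, not the authors' script) is to extract and match features as usual, then run the point triangulator on a model whose cameras.txt and images.txt are filled in from the GT poses and whose points3D.txt is left empty:

    colmap feature_extractor --database_path database.db --image_path images
    colmap exhaustive_matcher --database_path database.db
    colmap point_triangulator --database_path database.db --image_path images \
        --input_path sparse_known_poses --output_path sparse/0

Here sparse_known_poses is a hypothetical folder containing cameras.txt/images.txt converted from cameras.npz.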

Training error: attributes of Gaussians become NaN

Thank you for your awesome work.

But when I run on a 360v2 scene like garden, the model trains extremely slowly in the first 400 iterations, and then the attributes of the Gaussians become NaN.

I didn't set any of the regularization arguments.

Could you give me some help?

(screenshot attached)

Mask Rendering Loss

Hi, great work!
render_pkg = render(viewpoint_cam, gaussians, pipe, background)
The render_pkg contains 'rend_alpha'; can this be treated as a rendered mask image?
I wonder whether it supports optimizing the scene geometry with a mask rendering loss. Do you think it would be useful for filtering out useless Gaussian points?

Thanks a lot!
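(Editorial sketch, not repo code: one way such a mask loss could look, assuming a hypothetical gt_mask tensor of shape [1, H, W] aligned with the current view and a hypothetical weight lambda_mask.)

    import torch.nn.functional as F

    rend_alpha = render_pkg['rend_alpha']        # [1, H, W] accumulated opacity
    mask_loss = F.l1_loss(rend_alpha, gt_mask)   # supervise alpha with the GT mask
    loss = loss + lambda_mask * mask_loss        # add to the training loss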

Installation error

Hi, I am trying to install on Windows 11 and ran into this error:

Processing d:\workspace\2d-gaussian-splatting\submodules\diff-surfel-rasterization
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'error'

Pip subprocess error:
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "D:\workspace\2d-gaussian-splatting\submodules\diff-surfel-rasterization\setup.py", line 13, in <module>
          from torch.utils.cpp_extension import CUDAExtension, BuildExtension
        File "C:\Users\Up2U\miniconda3\envs\surfel_splatting\lib\site-packages\torch\utils\cpp_extension.py", line 25, >
          from pkg_resources import packaging  # type: ignore[attr-defined]
      ImportError: cannot import name 'packaging' from 'pkg_resources' (C:\Users\Up2U\miniconda3\envs\surfel_splatting\)
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

I compared with 3DGS's environment.yml and found no cudatoolkit in the dependencies.
I tried adding it, but I still get this error.

Any idea about this?
Thank you.
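(Editorial note, not a confirmed diagnosis: setuptools 70+ removed the vendored packaging module from pkg_resources, which older versions of torch.utils.cpp_extension still import. Pinning setuptools below 70 inside the conda environment usually resolves exactly this ImportError:)

    pip install "setuptools<70"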

A way to use the 3dgs Viewer

I made some minor modifications to your code so that it can be used with the viewer provided by 3DGS. Here's my approach:

  1. Change the code in the function construct_list_of_attributes in scene/gaussian_model.py like this:

    l = ['x', 'y', 'z', 'nx', 'ny', 'nz']
    # All channels except the 3 DC
    for i in range(self._features_dc.shape[1]*self._features_dc.shape[2]):
        l.append('f_dc_{}'.format(i))
    for i in range(self._features_rest.shape[1]*self._features_rest.shape[2]):
        l.append('f_rest_{}'.format(i))
    l.append('opacity')
    for i in range(self._scaling.shape[1] + 1):
        l.append('scale_{}'.format(i))
    ......

This way, the scales have 3 dimensions, the same as in 3DGS.
2. Change the code in the function save_ply in scene/gaussian_model.py like this:

    mkdir_p(os.path.dirname(path))

    xyz = self._xyz.detach().cpu().numpy()
    normals = np.zeros_like(xyz)
    f_dc = self._features_dc.detach().transpose(1, 2).flatten(start_dim=1).contiguous().cpu().numpy()
    f_rest = self._features_rest.detach().transpose(1, 2).flatten(start_dim=1).contiguous().cpu().numpy()
    opacities = self._opacity.detach().cpu().numpy()
    scale = self._scaling.detach().cpu().numpy()
    rotation = self._rotation.detach().cpu().numpy()
    one_line = np.full((scale.shape[0], 1), 0.0000000001)
    one_line = np.log(one_line)
    scale = np.concatenate((scale, one_line), axis=1)
    ......

This way, each 2D Gaussian is turned into a 3D Gaussian with a near-zero third scale.

MipNeRF360 Training Time

Excellent Work!
I'm evaluating the MipNeRF360 dataset on one RTX 4090 GPU.
The average training time over the 9 scenes is 29 minutes (with grid_x and grid_y already moved to the GPU).
I wonder whether this training time is normal, since you didn't report training times in your paper.
Thank you!

Unbounded campus scene extracts the triangle mesh incorrectly

Hi, I input a campus scene captured by a drone; the rendered images are correct, but the extracted triangle mesh is wrong. Here are the commands I am executing:
python render.py -m <path to pre-trained model> -s <path to COLMAP dataset> --mesh_res 1024 --unbounded
Did I execute the wrong command or do I need to further optimize the mesh extraction algorithm?

Extract Mesh Problem

Loading trained model at iteration 30000
Reading camera 240/240
Loading Training Cameras
Loading Test Cameras
export training images ...
reconstruct radiance fields: 240it [00:03, 64.30it/s]
export images: 240it [02:02, 1.97it/s]
export mesh ...
reconstruct radiance fields: 240it [00:02, 82.01it/s]
Running tsdf volume integration ...
voxel_size: 0.004
sdf_trunc: 0.02
depth_truc: 3.0
TSDF integration progress: 240it [00:01, 159.15it/s]
[Open3D WARNING] Write PLY failed: mesh has 0 vertices.
mesh saved at /root/autodl-tmp/project/2d-gaussian-splatting/output/127876ac-b/train/ours_30000/fuse.ply
post processing the mesh to have 1000 clusterscluster_to_kep
[Open3D DEBUG] [ClusterConnectedTriangles] Compute triangle adjacency
[Open3D DEBUG] [ClusterConnectedTriangles] Done computing triangle adjacency
[Open3D DEBUG] [ClusterConnectedTriangles] Done clustering, #clusters=0
Traceback (most recent call last):
File "render.py", line 102, in
mesh_post = post_process_mesh(mesh, cluster_to_keep=args.num_cluster)
File "/root/autodl-tmp/project/2d-gaussian-splatting/utils/mesh_utils.py", line 35, in post_process_mesh
n_cluster = np.sort(cluster_n_triangles.copy())[-cluster_to_keep]
IndexError: index -1000 is out of bounds for axis 0 with size 0.

I train and render with my own dataset. What is the possible problem?
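(Editorial note: a mesh with 0 vertices at the TSDF stage usually means the depth_trunc/voxel_size defaults don't match the scene's scale, so post_process_mesh has nothing to cluster. A minimal guard, sketched around the names in the traceback:)

    import numpy as np

    if len(np.asarray(mesh.vertices)) == 0:
        print("TSDF fusion produced an empty mesh; "
              "try --depth_trunc/--voxel_size values matched to the scene scale")
    else:
        mesh_post = post_process_mesh(mesh, cluster_to_keep=args.num_cluster)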

Do final_A, final_D, and final_D2 in lines 361–368 of cuda_rasterizer/backward.cu need to change with the loop?

#else
			dL_dweight += (final_D2 + m_d * m_d * final_A - 2 * m_d * final_D) * dL_dreg;
#endif
			dL_dalpha += dL_dweight - last_dL_dT;
			// propagate the current weight W_{i} to next weight W_{i-1}
			last_dL_dT = dL_dweight * alpha + (1 - alpha) * last_dL_dT;
			float dL_dmd = 2.0f * (T * alpha) * (m_d * final_A - final_D) * dL_dreg;
			dL_dz += dL_dmd * dmd_dd;

In particular, do final_A and final_D in line 367 need to change as the loop iterates?

How can I solve the poor reconstruction results?

I took multiple views of a sofa with a background in Blender, as shown in Figure 1. The reconstruction quality of the exported mesh file is not very good, as shown in Figure 2. I also looked at the 2D Gaussian point cloud, as shown in Figure 3. How can I solve this problem?
(Figures 1–3: screenshots attached)

Unwanted behaviour when using white background once normal loss starts

I'm training on a set of images that have been preprocessed by applying a segmentation mask, i.e. the background is white.

Training works well up until iteration 7000, when the normal loss kicks in:
(screenshot)

However, after the normal loss starts being applied, I get unwanted Gaussians, such as those seen here:
(screenshot)

These eventually grow to fill the entire background:
(screenshot)

Is there any way to fix this behavior with the current implementation?

How to feed data in IDR format

Hello @hbb1,

Thanks for your fantastic work!

I have a query: I've processed my data in the IDR format. How do I feed it to the model, since it takes COLMAP-format data here?

My data format:

CASE_NAME
|-- cameras.npz    # camera parameters
|-- image
    |-- 000.png        # image for each view
    |-- 001.png
    ...
|-- mask
    |-- 000.png        # mask for each view
    |-- 001.png
    ...
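(Editorial sketch of reading IDR-style cameras.npz, following the IDR/NeuS convention rather than code from this repo: P = world_mat @ scale_mat projects world points to pixels, and cv2.decomposeProjectionMatrix recovers intrinsics and pose, which can then be written out in COLMAP's text format.)

    import cv2
    import numpy as np

    camera_dict = np.load('CASE_NAME/cameras.npz')
    n_views = len([k for k in camera_dict.files
                   if k.startswith('world_mat_') and 'inv' not in k])
    for i in range(n_views):
        P = (camera_dict['world_mat_%d' % i] @ camera_dict['scale_mat_%d' % i])[:3, :4]
        K, R, t = cv2.decomposeProjectionMatrix(P)[:3]
        K = K / K[2, 2]                      # normalized intrinsics
        pose = np.eye(4)
        pose[:3, :3] = R.T                   # camera-to-world rotation
        pose[:3, 3] = (t[:3] / t[3])[:, 0]   # camera center
        # ...write K and pose into COLMAP text files (cameras.txt / images.txt)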

Online SIBR viewer for 2DGS

Hi, thanks for your great work. Is there a SIBR viewer for 2DGS? The original SIBR viewer for 3DGS does not work for 2DGS.
Looking forward to your reply, thanks!
@hbb1

Question about synthetic dataset

Hi!
When trying to reconstruct the mesh on the synthetic dataset, I noticed a significant number of black erroneous faces in the output. Have you tested on this dataset before? Should I consider additional hyperparameter adjustments? Any insights or suggestions for this issue would be greatly appreciated.
(screenshots attached)

Could not recognize scene type!

When I trained the model on my own dataset, training went smoothly, but there were some issues with rendering. The result is shown below:
(screenshot attached)
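(Editorial note, based on the data loader shared with 3DGS: the "Could not recognize scene type!" error is raised when the source path contains neither a COLMAP sparse folder nor a Blender transforms_train.json. A COLMAP-style dataset is expected to look like:)

    <source path>
    |-- images
    |   |-- ...
    |-- sparse
        |-- 0
            |-- cameras.bin
            |-- images.bin
            |-- points3D.bin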

The number of points keeps growing in the RefReal dataset

Hello, I use 2DGS to train the gardenspheres scene from the ref_real dataset of Ref-NeRF. The number of Gaussian points keeps growing: by around 15,000 iterations it exceeds 10 million, causing memory to explode. Can you tell me the possible causes of this problem?
I train with --lambda_dist 100 --depth_ratio 0 on a 4090.

view the rendered scene

Thank you for your excellent work. I have obtained a good mesh and would like to know how to view the rendered scene. Could you please help me with that?

IndexError: index -1000 is out of bounds for axis 0 with size 0

CUDA_VISIBLE_DEVICES=0 python render.py -m outputs/backpack -s datas/backpack -r 2 --depth_ratio 1 --depth_trunc 3 --voxel_size 0.004 --skip_test --skip_train

Looking for config file in outputs/backpack/cfg_args
Config file found: outputs/backpack/cfg_args
Rendering outputs/backpack
Loading trained model at iteration 15000
Reading camera 29/29
Loading Training Cameras
Loading Test Cameras
export mesh ...
reconstruct radiance fields: 29it [00:01, 25.22it/s]
Running tsdf volume integration ...
voxel_size: 0.004
sdf_trunc: 0.02
depth_truc: 3.0
TSDF integration progress: 29it [00:00, 71.96it/s]
[Open3D WARNING] Write PLY failed: mesh has 0 vertices.
mesh saved at outputs/backpack/train/ours_15000/fuse.ply
post processing the mesh to have 1000 clusterscluster_to_kep
[Open3D DEBUG] [ClusterConnectedTriangles] Compute triangle adjacency
[Open3D DEBUG] [ClusterConnectedTriangles] Done computing triangle adjacency
[Open3D DEBUG] [ClusterConnectedTriangles] Done clustering, #clusters=0
Traceback (most recent call last):
File "/remote-home/hbyang/codes/3DV/2d-gaussian-splatting/render.py", line 102, in
mesh_post = post_process_mesh(mesh, cluster_to_keep=args.num_cluster)
File "/remote-home/hbyang/codes/3DV/2d-gaussian-splatting/utils/mesh_utils.py", line 35, in post_process_mesh
n_cluster = np.sort(cluster_n_triangles.copy())[-cluster_to_keep]
IndexError: index -1000 is out of bounds for axis 0 with size 0

custom dataset

I really like the work you do!! But I would like to ask when I want to use the data from my own reality, what form do I need to process the data into, what are the details of the input files needed, and how do I set up multi-device operation if I have more than one gpu? thanks

DTU dataset evaluation

When evaluating the DTU dataset, are the training and testing image sets the same?

RuntimeError: Function _RasterizeGaussiansBackward returned an invalid gradient at index 5 - got [231550, 2] but expected shape compatible with [231550, 3]

RuntimeError: Function _RasterizeGaussiansBackward returned an invalid gradient at index 5 - got [231550, 2] but expected shape compatible with [231550, 3]

I checked the input and the computed gradient sizes, and they don't match:

Forward grad_scales shape: torch.Size([231550, 3]) [26/05 08:46:47]
Backward grad_scales shape: torch.Size([231550, 2]) [26/05 08:46:47]

What should I do to address this problem?

Many thanks!
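(A hedged reading of the error, editorial rather than a confirmed fix: the surfel rasterizer treats splats as flat and therefore produces two-component scale gradients in the backward pass, so the [231550, 3] expectation means a three-component scaling tensor was passed in on the forward side. A minimal check before rasterization:)

    scales = gaussians.get_scaling    # should be [N, 2] for the surfel rasterizer
    if scales.shape[-1] == 3:         # 3DGS-style scales sneaked in
        scales = scales[:, :2]        # keep only the two in-plane scales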

Add Mask

Hello, I'm sorry to bother you. How do you remove the background from the image? I input the corresponding mask image, but the rendered result still includes the background.

FileNotFoundError: [Errno 2] No such file or directory: '/Offical_DTU_Dataset/Points/stl/ObsMask/ObsMask24_10.mat'

Thank you very much for your excellent work. I am new to this field and there is no problem with training and testing when reproducing your code. But when I run the full evaluation of DTU data, I get the following error:

mesh post processed saved at ./eval/dtu/scan122/train/ours_30000/fuse_post.ply
cull mesh ....
Culling mesh given masks: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49/49 [01:06<00:00, 1.37s/it]
masking data pcd: 44%|█████████████████████████████████████████████████████████████████████████████████████▊ | 4/9 [00:36<00:56, 11.31s/it]Traceback (most recent call last):
File "/root/anaconda3/envs/surfel_splatting/lib/python3.8/site-packages/scipy/io/matlab/_mio.py", line 39, in _open_file
return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: '/root/autodl-tmp/dtu_eval/Offical_DTU_Dataset/Points/stl/ObsMask/ObsMask24_10.mat'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/autodl-tmp/2d/scripts/eval_dtu/eval.py", line 98, in
obs_mask_file = loadmat(f'{args.dataset_dir}/ObsMask/ObsMask{args.scan}_10.mat')
File "/root/anaconda3/envs/surfel_splatting/lib/python3.8/site-packages/scipy/io/matlab/_mio.py", line 225, in loadmat
with _open_file_context(file_name, appendmat) as f:
File "/root/anaconda3/envs/surfel_splatting/lib/python3.8/contextlib.py", line 113, in enter
return next(self.gen)
File "/root/anaconda3/envs/surfel_splatting/lib/python3.8/site-packages/scipy/io/matlab/_mio.py", line 17, in _open_file_context
f, opened = _open_file(file_like, appendmat, mode)
File "/root/anaconda3/envs/surfel_splatting/lib/python3.8/site-packages/scipy/io/matlab/_mio.py", line 45, in _open_file
return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: '/root/autodl-tmp/dtu_eval/Offical_DTU_Dataset/Points/stl/ObsMask/ObsMask24_10.mat'
masking data pcd: 44%|█████████████████████████████████████████████████████████████████████████████████████▊ | 4/9 [00:36<00:45, 9.18s/it]
cull mesh ....

I downloaded the Points archive from the DTU point cloud link you provided, but it only contains the STL ply files, without the ObsMask files (ObsMaskxxx_10.mat and Planexxx.mat). How can I solve this problem?

Inquiry Regarding Tn(normal) Calculation

Recently, I've been quite confused by a matrix multiplication. I noticed that in forward.cu the normal is multiplied by the viewMatrix, which transforms the normals into camera space. However, in the renderer's __init__.py the viewMatrix is multiplied in again when rendering the final normal_map. My question is: why do we need to multiply by the viewMatrix twice, considering that I believe the final normal_map is already in camera space?
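(A general sanity check that may help, plain linear algebra rather than a claim about this repo's matrix conventions: if the second multiplication uses the transpose of the view rotation, it undoes the first one, mapping camera-space normals back to world space instead of transforming them twice.)

    import torch

    Q, _ = torch.linalg.qr(torch.randn(3, 3))      # a random orthonormal "view" rotation
    n_world = torch.nn.functional.normalize(torch.randn(3), dim=0)
    n_cam = Q @ n_world                            # into camera space (as in forward.cu)
    assert torch.allclose(Q.T @ n_cam, n_world, atol=1e-5)   # back out to world space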

extracted mesh broken

Thanks for your great work and for sharing the code. I tried it out a bit and got a broken mesh. Am I configuring it wrong? Here are my commands and the results.

(gs) C:\_piper\2d-gaussian-splatting>python train.py -s D:/Data/nerf_synthetic/lego --depth_ratio 0
Optimizing
Output folder: ./output/af85311a-6 [07/05 13:57:45]
Tensorboard not available: not logging progress [07/05 13:57:45]
Found transforms_train.json file, assuming Blender data set! [07/05 13:57:45]
Reading Training Transforms [07/05 13:57:45]
Reading Test Transforms [07/05 13:57:59]
Loading Training Cameras [07/05 13:58:24]
Loading Test Cameras [07/05 13:58:28]
Number of points at initialisation :  100000 [07/05 13:58:28]
Training progress:  23%|████████████████████                                                                  | 7000/30000 [14:21<46:07,  8.31it/s, Loss=0.00785, distort=0.00001, normal=0.00000, Points=161496]
[ITER 7000] Evaluating train: L1 0.006597565952688456 PSNR 33.03615760803223 [07/05 14:12:50]

[ITER 7000] Saving Gaussians [07/05 14:12:50]
Training progress: 100%|█████████████████████████████████████████████████████████████████████████████████████| 30000/30000 [56:53<00:00,  8.79it/s, Loss=0.00702, distort=0.00002, normal=0.02754, Points=165112]

[ITER 30000] Evaluating train: L1 0.005794808687642217 PSNR 33.47078514099121 [07/05 14:55:22]

[ITER 30000] Saving Gaussians [07/05 14:55:22]

Training complete. [07/05 14:55:23]

python render.py -s D:/Data/nerf_synthetic/lego -m ./output/b157e902-3

(gs) C:\_piper\2d-gaussian-splatting>python render.py -s D:/Data/nerf_synthetic/lego -m ./output/b157e902-3/
Looking for config file in ./output/b157e902-3/cfg_args
Config file found: ./output/b157e902-3/cfg_args
Rendering ./output/b157e902-3/
Loading trained model at iteration 30000
Found transforms_train.json file, assuming Blender data set!
Reading Training Transforms
Reading Test Transforms
Loading Training Cameras
Loading Test Cameras
export training images ...
reconstruct radiance fields: 300it [00:10, 28.07it/s]
export images: 300it [03:22,  1.48it/s]
export mesh ...
reconstruct radiance fields: 300it [00:08, 34.71it/s]
Running tsdf volume integration ...
voxel_size: 0.004
sdf_trunc: 0.02
depth_truc: 3.0
TSDF integration progress: 300it [00:15, 19.62it/s]
mesh saved at ./output/b157e902-3/train\ours_30000\fuse.ply
post processing the mesh to have 1000 clusterscluster_to_kep
[Open3D DEBUG] [ClusterConnectedTriangles] Compute triangle adjacency
[Open3D DEBUG] [ClusterConnectedTriangles] Done computing triangle adjacency
[Open3D DEBUG] [ClusterConnectedTriangles] Done clustering, #clusters=1542
num vertices raw 393487
num vertices post 384212
mesh post processed saved at ./output/b157e902-3/train\ours_30000\fuse_post.ply

And the fuse_post.ply looks like this:

(screenshot attached)

Could you point out where I did wrong? Thank you.

How to remove noise points?

When training on one's own background-free dataset, there is noise in the output 2D GS points. How can I remove it?

(screenshots attached)
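(Editorial sketch, not repo code: one common cleanup is pruning low-opacity floaters from the exported point_cloud.ply. In the 3DGS/2DGS PLY layout, 'opacity' is stored pre-activation, so a sigmoid is applied before thresholding; the 0.3 threshold is a guess to tune.)

    import numpy as np
    from plyfile import PlyData, PlyElement

    ply = PlyData.read('point_cloud.ply')
    v = ply['vertex'].data
    keep = 1.0 / (1.0 + np.exp(-v['opacity'])) > 0.3          # sigmoid(opacity) threshold
    PlyData([PlyElement.describe(v[keep], 'vertex')]).write('point_cloud_filtered.ply')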

Code release

Thanks for your excellent work. When will the code be released?

Training loss cannot converge with a custom cropped dataset

Thanks for your great work! I tried with my own data and the training loss could not converge; all training parameters are default. My data has been cropped, so the principal point is not at the center of the image. I modified the projection matrix accordingly, and these modifications work fine with 3DGS but not with 2DGS. I have no idea how to fix this; could you please give me some advice? Thank you :)
(screenshot attached)

Indoor scene reconstruction

When I was testing indoor scenes, the results were poor. Have you tested on indoor scene data? How should I adjust the model configuration?
(image failed to upload: indoorscene.jpg)

Error when rendering

My command is:
python render.py -m ./output/scan24-colmap/ -s /data1/Datasets/Colmap/scan24 --resolution 1024

Looking for config file in ./output/scan24-colmap/cfg_args
Config file found: ./output/scan24-colmap/cfg_args
Rendering ./output/scan24-colmap/
Loading trained model at iteration 30000
Reading camera 49/49
Loading Training Cameras
Loading Test Cameras
export training images ...
reconstruct radiance fields: 49it [00:02, 22.13it/s]
Traceback (most recent call last):
File "render.py", line 61, in
gaussExtractor.reconstruction(scene.getTrainCameras())
File "/root/miniconda3/envs/gaussian_splatting/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/data1/sm/2d-gaussian-splatting/utils/mesh_utils.py", line 113, in reconstruction
self.rgbmaps = torch.stack(self.rgbmaps, dim=0)
RuntimeError: stack expects each tensor to be equal size, but got [3, 767, 1024] at entry 0 and [3, 768, 1024] at entry 15
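(A hedged workaround, sketched around the names in the traceback: resize every map to one common resolution before torch.stack, since a target-width rescale like '--resolution 1024' can round heights differently across images, e.g. 767 vs 768.)

    import torch
    import torch.nn.functional as F

    h, w = self.rgbmaps[0].shape[-2:]
    self.rgbmaps = torch.stack(
        [F.interpolate(m[None], size=(h, w), mode='bilinear', align_corners=False)[0]
         for m in self.rgbmaps], dim=0)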

How to debug the CUDA portion of the code?

Hello, my project requires modifying the CUDA portion of your code, but since the CUDA code is built as a PyTorch extension, I'm unsure how to debug it separately. I would really appreciate knowing how you debug the CUDA portion of the code.
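(One common approach, an editorial sketch rather than the authors' workflow: rebuild the rasterizer extension with device-side debug symbols and step through the kernels with cuda-gdb; printf() inside the kernels also works for quick inspection.)

    # hypothetical change to submodules/diff-surfel-rasterization/setup.py:
    # -G keeps device symbols and disables device-side optimizations
    ext_modules=[CUDAExtension(
        name="diff_surfel_rasterization._C",
        sources=sources,                      # the submodule's existing source list
        extra_compile_args={"nvcc": ["-g", "-G"], "cxx": ["-g"]})],

Then reinstall with pip install ./submodules/diff-surfel-rasterization and run, e.g., cuda-gdb --args python train.py -s <path to data>.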

DTU training time

Great work, congrats!
I'm confused about the training time in the DTU experiments. In your paper it takes about 5.5 minutes for 15k iterations on one 3090 GPU, but it took me 30 minutes with the official command. Is any modification needed for the DTU data?
Thanks again for your awesome work.

About DTU dataset folder structure

Hello, I am a newbie to neural rendering and reconstruction. Thank you very much for your work. I want to train and render on the DTU dataset. What folder structure should the dataset have? Since DTU's camera information is stored in .txt files, it seems to differ from Mip-NeRF 360.

Unsatisfying rendering result

After training both 2DGS and 3DGS on the same dataset (T&T, train scene), I noticed that the rendering results of 2DGS are not as good as those of 3DGS.
The following examples are GT, 3DGS, and 2DGS respectively:
(comparison screenshots attached)
Both were trained for 30k iterations on an RTX 4090 with default parameters.
I wonder what may cause the blur in the results of 2DGS.

RuntimeError: CUDA out of memory

Thanks for your excellent work!
In my training run, the image resolution is 2048×2048 with no downscaling (using '-r 1'). I want to train for 30k iterations, but every run hits OOM at around 20k. Could you tell me how to modify the code to fix this problem?

This is my complete command: python train.py -s /home/xxx/2d-gaussian-splatting/dataset/paper -m /home/xxx/2d-gaussian-splatting/result -r 1 --depth_ratio 1

This is the error:
Traceback (most recent call last):
File "/home/xxx/2d-gaussian-splatting/train.py", line 253, in
training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint)
File "/home/xxx/2d-gaussian-splatting/train.py", line 69, in training
render_pkg = render(viewpoint_cam, gaussians, pipe, background)
File "/home/xxx/2d-gaussian-splatting/gaussian_renderer/init.py", line 97, in render
rendered_image, radii, allmap = rasterizer(
File "/home/xxx/anaconda3/envs/sugar/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xxx/anaconda3/envs/sugar/lib/python3.9/site-packages/diff_surfel_rasterization/init.py", line 212, in forward
return rasterize_gaussians(
File "/home/xxx/anaconda3/envs/sugar/lib/python3.9/site-packages/diff_surfel_rasterization/init.py", line 32, in rasterize_gaussians
return _RasterizeGaussians.apply(
File "/home/xxx/anaconda3/envs/sugar/lib/python3.9/site-packages/diff_surfel_rasterization/init.py", line 92, in forward
num_rendered, color, depth, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: CUDA out of memory. Tried to allocate 3.26 GiB (GPU 0; 23.67 GiB total capacity; 11.02 GiB already allocated; 2.14 GiB free; 17.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
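(Following the allocator hint in the message itself, a sketch where 128 is a starting guess rather than a tuned value: capping the split size before CUDA initializes can reduce fragmentation.)

    import os
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"   # set before any CUDA call

Rendering-resolution downscaling ('-r 2') is the other obvious lever if the fragmentation cap is not enough.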

Question about the depth distortion loss

Hi, thanks for your excellent work!

I noticed that you set lambda_dist = 0.0 for the distortion loss in the latest version, while in the previous version it was set to 100.

May I ask why dist_loss = lambda_dist * (rend_dist).mean() is set to 0 for the entire training run, and will this cause the final rendered depth to look bad?
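(For readers following along, the quoted line just scales the rendered distortion map into a loss term, so lambda_dist = 0.0 disables the regularizer entirely. A sketch around the quoted line; the neighboring names are assumptions, not verified repo code.)

    dist_loss = lambda_dist * rend_dist.mean()         # 0 when lambda_dist == 0.0
    normal_loss = lambda_normal * normal_error.mean()  # normal consistency term
    total_loss = rgb_loss + dist_loss + normal_loss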

Help with cov3D_precomp

Hi,

Thanks for the great work. Does your code allow taking a covariance matrix as input? If I set cov3D_precomp to True, does the function build_covariance_from_scaling_rotation perform exactly the same operations as your CUDA code does when scales and rotations are given as input instead?

Thanks in advance
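(For reference, an editorial sketch of the standard scaling/rotation-to-covariance construction used across 3DGS-family codebases, Sigma = R S S^T R^T; whether it matches this repo's CUDA kernels bit-for-bit, e.g. in quaternion convention, is exactly the question being asked.)

    import torch

    def build_covariance(scaling, rotation):
        # rotation: [N, 4] quaternions (w, x, y, z); scaling: [N, 3]
        # (for 2DGS the third scale is effectively zero, making the splat flat)
        w, x, y, z = torch.nn.functional.normalize(rotation, dim=-1).unbind(-1)
        R = torch.stack([
            1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),
            2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),
            2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y),
        ], dim=-1).reshape(-1, 3, 3)
        L = R * scaling[:, None, :]          # L = R @ diag(s)
        return L @ L.transpose(1, 2)         # Sigma = L L^T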

Gaussians spread out on a background-free dataset

When I use a dataset with no background, the Gaussians become scattered; they appear in background positions after 7k iterations, causing black areas in the mesh. Is this phenomenon caused by the normal consistency regularization? Is there any way to avoid this problem?
image
image

Artifacts in rendered image

Hi, while saving images I have observed that they sometimes contain weird artifacts, like the one shown below. Could this be caused by the code I use for saving the intermediate images, shown below?

    if not os.path.exists(os.path.join('output', 'images')):
        os.makedirs(os.path.join('output', 'images'))
    io.imsave(os.path.join('output', 'images', '{0:05d}'.format(iteration) + ".png"), (image.permute(1,2,0).cpu().detach().numpy()*255).astype(np.uint8))

(screenshot attached)
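(A plausible cause, an editorial guess rather than the authors' diagnosis: rendered values slightly outside [0, 1] wrap around when cast to uint8, producing speckle artifacts. Clamping before the conversion avoids this:)

    import os
    import numpy as np
    from skimage import io

    out_dir = os.path.join('output', 'images')
    os.makedirs(out_dir, exist_ok=True)
    img = image.clamp(0, 1).permute(1, 2, 0).detach().cpu().numpy()   # clamp first
    io.imsave(os.path.join(out_dir, '{0:05d}.png'.format(iteration)),
              (img * 255).astype(np.uint8))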
