xuan-li / pac-nerf

Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification

License: MIT License

Python 84.17% C++ 3.87% Cuda 11.96%

pac-nerf's People

Contributors

xuan-li


pac-nerf's Issues

Global store may lose precision

@xuan-li Hi, thanks for your nice work. However, I encountered the following warnings:

[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18318] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18320] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18322] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18324] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18326] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18328] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18330] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18332] Local store may lose precision: f32 <- f64
[W 04/05/23 08:28:58.286 16373] [type_check.cpp:type_check_store@36] [$18334] Local store may lose precision: f32 <- f64
[W 04/05/23 08:29:17.144 16373] [type_check.cpp:type_check_store@36] [$77533] Global store may lose precision: i8 <- i32
File "/opt/data/private/PAC-NeRF-main/lib/engine/mpm_simulator.py", line 318, in check_cfl:
self.cfl_satisfy[None] = 0
^^^^^^^^^^^^^^^^^^^^^^^^^^
[W 04/05/23 08:29:17.144 16373] [type_check.cpp:type_check_store@36] [$77572] Global store may lose precision: i8 <- i32
File "/opt/data/private/PAC-NeRF-main/lib/engine/mpm_simulator.py", line 320, in check_cfl:
self.cfl_satisfy[None] = 0
^^^^^^^^^^^^^^^^^^^^^^^^^^
[Forward] loss: 0.14419050514698029: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:36<00:00, 7.29s/it]
Time elaspsed: 1071.8159348964691
[Backward]: 0%| | 0/5 [00:00<?, ?it/s][W 04/05/23 08:30:06.992 16373] [type_check.cpp:type_check_store@36] [$173665] Atomic add may lose precision: f32 <- f64
File "/opt/data/private/PAC-NeRF-main/lib/engine/mpm_simulator.py", line 157, in svd_grad:
self.F_tmp.grad[p] += self.backward_svd(self.U.grad[p].cast(ti.f64),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
self.sig.grad[p].cast(ti.f64), self.V.grad[p].cast(ti.f64),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
self.U[p].cast(ti.f64), self.sig[p].cast(ti.f64), self.V[p].cast(ti.f64))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

(the same "Atomic add may lose precision: f32 <- f64" warning and svd_grad traceback is repeated for $173670 through $173705)
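
For context, Taichi's type checker emits these warnings whenever a wider value is written into a narrower field. A minimal, hypothetical reproduction (not PAC-NeRF code) of the "i8 <- i32" case from check_cfl, together with the explicit cast that would silence it:

import taichi as ti

ti.init(arch=ti.cpu)

flag = ti.field(ti.i8, shape=())       # narrow field, analogous to cfl_satisfy

@ti.kernel
def set_flag():
    flag[None] = 0                     # the literal 0 is i32, so Taichi warns "Global store may lose precision: i8 <- i32"
    # flag[None] = ti.cast(0, ti.i8)   # an explicit cast avoids the warning

set_flag()

The f32 <- f64 warnings from svd_grad are the same mechanism: f64-cast SVD gradients are accumulated into f32 fields, so the values are silently narrowed.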

mmcv version error

It looks like a recent mmcv release moved the config handling from mmcv to mmengine.
I tried some modifications, but in the end I just downgraded mmcv to 1.7.1.

I suggest pinning this version in the requirements file.

Thanks!
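
A small, hypothetical guard (not part of the PAC-NeRF repo) that fails fast with a clear message if the installed mmcv is newer than the last release using the old config API:

from packaging import version
import mmcv

if version.parse(mmcv.__version__) > version.parse("1.7.1"):
    raise ImportError(
        f"mmcv {mmcv.__version__} has moved config handling to mmengine; "
        "install mmcv<=1.7.1, e.g. `pip install mmcv==1.7.1`."
    )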

ValueError when running the Cat example

I am facing a float-precision error when training the velocity for the cat case:

[W 07/06/23 15:30:56.874 2724704] [type_check.cpp:type_check_store@36] [$177661] Atomic add may lose precision: f32 <- f64
File "/DATA_EDS/louhz/PAC-NeRF/lib/engine/mpm_simulator.py", line 157, in svd_grad:
self.F_tmp.grad[p] += self.backward_svd(self.U.grad[p].cast(ti.f64),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
self.sig.grad[p].cast(ti.f64), self.V.grad[p].cast(ti.f64),

After looking into Taichi's discussions,
I found a related issue: taichi-dev/taichi#5059

It looks like Taichi does not fully support float64, if my understanding is correct.

Do you have any suggestions about this?
Thanks!
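
For what it's worth, the warning itself only reports a narrowing conversion. It can be reproduced in isolation with a minimal Taichi kernel (a hypothetical sketch, not PAC-NeRF code) that accumulates an f64 value into an f32 field:

import taichi as ti

ti.init(arch=ti.cpu)

acc = ti.field(ti.f32, shape=())

@ti.kernel
def accumulate():
    for i in range(8):                    # top-level for loop is parallelized, so += becomes an atomic add
        x = ti.cast(1.0, ti.f64)          # f64 intermediate, like the casted SVD gradients
        acc[None] += x                    # f64 added into an f32 field -> "Atomic add may lose precision: f32 <- f64"
        # acc[None] += ti.cast(x, ti.f32) # casting back to f32 silences the warning

accumulate()

Whether the narrowing matters numerically depends on the gradient magnitudes; the warning by itself does not explain a ValueError.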

Run with a lower version of cuda

Hi, thanks for your nice work. Can I run this with a lower version of CUDA? My server is limited to versions below 11.2.
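
A quick, hypothetical environment check (not part of the repo) that shows which CUDA toolkit the installed PyTorch build targets and whether Taichi can initialize its CUDA backend on the machine:

import torch
import taichi as ti

print("PyTorch built against CUDA:", torch.version.cuda)
print("torch.cuda.is_available():", torch.cuda.is_available())

ti.init(arch=ti.cuda)  # prints an error / falls back if the CUDA driver is unusable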

Ground-truth physical parameters/initial velocities?

Hi, thanks for open-sourcing the code. May I ask how to get the ground-truth physical parameters (e.g., Young's modulus, Poisson's ratio) for the experiments in Table 2? Also, what are the ground-truth initial velocities for both Table 1 and Table 2?

Increase the resolution of the grid

Hello, I want to increase the grid resolution to represent more detailed objects, but I ran into a problem. How can I solve it? Have you ever tried increasing the grid resolution?

I tried changing pg_scale = [1000, 2000, 4000] to pg_scale = [1000, 2000, 3000, 4000] and then encountered this error:

Traceback (most recent call last):
  File "/opt/data/private/PAC-NeRF-main/train.py", line 287, in <module>
    train_static(cfg, pnerf, optimizer, start, cfg['N_static'], rays_o_all, rays_d_all, viewdirs_all, rgb_all, ray_mask_all)
  File "/opt/data/private/PAC-NeRF-main/train.py", line 163, in train_static
    global_loss = pnerf.forward(1, rays_o_all,
  File "/opt/data/private/PAC-NeRF-main/lib/pac_nerf.py", line 204, in forward
    self.dynamic_observer.initialize(self.init_particles, self.init_features, self.init_velocities, self.init_rhos, self.init_mu, self.init_lam, self.nerf.voxel_size, self.init_yield_stress, self.init_plastic_viscosity, self.init_friction_alpha, self.cohesion)
  File "/opt/data/private/PAC-NeRF-main/lib/engine/dynamic_observer.py", line 160, in initialize
    self.from_torch(particles.data.cpu().numpy(), features.data.cpu().numpy(), velocities.data.cpu().numpy(), particle_rho.data.cpu().numpy(), particle_mu.data.cpu().numpy(), particle_lam.data.cpu().numpy())
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 1002, in __call__
    return self._primal(self._kernel_owner, *args, **kwargs)
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 869, in __call__
    return self.runtime.compiled_functions[key](*args)
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 785, in func__
    raise e from None
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 782, in func__
    t_kernel(launch_ctx)
RuntimeError: [cuda_driver.h:operator()@87] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
[E 04/07/23 02:37:09.064 434] [cuda_driver.h:operator()@87] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
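
CUDA_ERROR_ASSERT is a device-side assert, which typically means some kernel hit an out-of-range access or a failed assertion after the resolution change. A hypothetical debugging step (not part of the PAC-NeRF code) is to re-run the failing kernel on CPU with Taichi's debug mode, which turns such failures into readable Python errors:

import taichi as ti

ti.init(arch=ti.cpu, debug=True)  # debug mode enables bounds and assert checks

x = ti.field(ti.f32, shape=4)

@ti.kernel
def out_of_bounds():
    x[4] = 1.0  # out of range; in debug mode this is reported with a stack trace
                # instead of a bare device-side assert

out_of_bounds()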

Is PAC-NeRF capable of simulating water?

Hi, I generated a water-drop case with 5 input views and ran PAC-NeRF training with material=MPMSimulator.viscous_fluid and mu=0.1, but the results failed.
Have you ever tried a similar scenario?

input.mp4
video_0.rgb.mp4

Simulation data

Hi. May I ask how to open and view the simulation data? Windows 11's 3D Viewer says it is unable to load the 3D model.
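
If the exported simulation frames are point clouds in PLY format (an assumption; the exact file names and format depend on the repo's export code), a minimal way to view one in Python is with Open3D:

import open3d as o3d

pcd = o3d.io.read_point_cloud("frame_0000.ply")   # hypothetical file name
print(pcd)                                        # reports the number of points
o3d.visualization.draw_geometries([pcd])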
