zielon / insta

INSTA - Instant Volumetric Head Avatars [CVPR2023]

Home Page: https://zielon.github.io/insta/

License: Other

CMake 1.13% Cuda 19.40% C 44.36% C++ 31.22% TeX 0.02% HTML 0.55% Python 2.78% Shell 0.20% PowerShell 0.21% Makefile 0.06% JavaScript 0.01% Lua 0.04% Batchfile 0.01%
instant-ngp avatars cuda flame nerf volumetric-rendering 3dmm neural-network

insta's People

Contributors: zielon

insta's Issues

About test sequences

Hello!
May I ask whether you have released the test dataset for the model? How can I obtain the test-sequence portion of your dataset?

Availability of the female datasets from your paper?

Hi, zielon, many thanks for your great work! It's pretty awesome!

May I ask whether you could open-source the female datasets used in your paper? I noticed that all the datasets you released feature male subjects, and releasing only male data could raise concerns about gender bias.

Looking forward to your reply! Many Thanks!

Best,
Xiangyue

OptiX incompatibility

  • Windows 11
  • OptiX 7.7
  • Vulkan 1.3
  • Cuda 12.1
  • RTX 3090
  • Python 3.11
  • here are the installed packages:
    colorama==0.4.6 commentjson==0.9.0 imageio==2.28.1 lark-parser==0.7.8 numpy==1.23.5 opencv-python==4.7.0.72 Pillow==9.5.0 pybind11==2.10.4 pyquaternion==0.9.9 scipy==1.10.1 tqdm==4.65.0

I followed the installation instructions with a few changes to CMakeLists.txt, namely replacing the regular find_package(PythonLibs) with find_package(Python 3.7 COMPONENTS Interpreter Development) and adding add_subdirectory("${CMAKE_CURRENT_SOURCE_DIR}/instant-ngp/dependencies/pybind11").
I was able to compile the current instant-ngp repo, but not this one.
The error I get is: fatal error C1083: Cannot open include file: 'fmt/core.h': No such file or directory
cmake_output.log

I moved past that by reintroducing get_target_property(TCNN_INCLUDE_DIRECTORIES tiny-cuda-nn INCLUDE_DIRECTORIES) and target_include_directories(optix_program PUBLIC ${NGP_INCLUDE_DIRECTORIES} ${TCNN_INCLUDE_DIRECTORIES})
into the CMakeLists.txt, but that also led to an error suggesting an incompatibility with the OptiX library. I noticed that in the instant-ngp repo they now supply the OptiX header files manually, which seems to avoid this. NVlabs/instant-ngp@c14da84
So is there a strict dependency on a specific version of OptiX for this repo? If so, which one should I install?
Here's the log from immediately after modifying the CMakeLists.txt:
cmake_output2.log
and here's the log from subsequent build attempts.
cmake_output3.log

Discrepancy in Parsing Results of face-parsing.PyTorch

Hello,

First of all, I would like to commend you on the excellent work you have done. It has provided valuable insights for related research.

While processing my own data using the face-parsing.PyTorch library (https://github.com/zllrunning/face-parsing.PyTorch), I noticed that the parsing results differed from your given examples. In the source code, a face is divided into multiple colors, creating a multicolored representation. You can see an example of this below:
[image]

However, the given examples showcase parsing colors that primarily consist of different levels of red, as shown in the following image:
[image: 00267]

This leads me to inquire about the new part_colors variable in the face-parsing.PyTorch/test.py script of your project. I believe it plays a significant role in achieving the final optimized results.

Thank you for your attention, and I look forward to your response.

Best regards.

About exporting the mesh to an .obj file

When I load a checkpoint with ./build/rta and want to keep training, it does not seem to train continuously. I have to click "start training", but then it only trains for a few steps and stops again. How can I fix this?
Another question: when I export a mesh .obj file from this GUI, the result is very bad. Do you have any suggestions?

CMake build error

Hi.
I'm trying to follow https://github.com/Zielon/INSTA#installation

"C:\Program Files\Git\bin\git.exe" clone -v --recurse-submodules --depth 1 --no-single-branch --progress "https://github.com/Zielon/INSTA.git" "C:/Users/user1/source/repos/Zielon/INSTA"

and I am facing this error:

PS C:\Users\user1\source\repos\Zielon\INSTA> cmake . -B build
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22000.0 to target Windows 10.0.19045.
-- The C compiler identification is MSVC 19.37.32825.0
-- The CXX compiler identification is MSVC 19.37.32825.0
-- The CUDA compiler identification is NVIDIA 12.1.105
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.37.32822/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.37.32822/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Targeting GPU architectures: 75
-- Module support is disabled.
-- Version: 9.0.0
-- Build type:
-- CXX_STANDARD: 14
-- Required features: cxx_variadic_templates
-- Found Vulkan: C:/VulkanSDK/1.3.268.0/Lib/vulkan-1.lib (found version "1.3.268") found components: glslc glslangValidator
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Using Win32 for window creation
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- OptiX_INSTALL_DIR value: C:\ProgramData\NVIDIA Corporation\OptiX SDK 8.0.0
-- Found PythonLibs: C:/Python311/libs/python311.lib (found version "3.11.0")
-- Configuring done (10.5s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/user1/source/repos/Zielon/INSTA/build
PS C:\Users\user1\source\repos\Zielon\INSTA> cmake --build build --config RelWithDebInfo -j
MSBuild version 17.7.2+d6990bcfa for .NET Framework

  Checking Build System
  Building Custom Rule C:/Users/user1/source/repos/Zielon/INSTA/CMakeLists.txt
  Building Custom Rule C:/Users/user1/source/repos/Zielon/INSTA/instant-ngp/dependencies/glfw/src/CMakeLists.txt
  context.c
  init.c
  input.c
  monitor.c
  vulkan.c
  window.c
  win32_init.c
  win32_joystick.c
  win32_monitor.c
  win32_time.c
  win32_thread.c
  win32_window.c
  wgl_context.c
  egl_context.c
  osmesa_context.c
  Generating Code...
  glfw_objects.vcxproj -> C:\Users\user1\source\repos\Zielon\INSTA\build\instant-ngp\dependencies\glfw\src\glfw_objects.dir\RelWithDebInfo\glfw_objects.lib
  Compiling CUDA source file ..\instant-ngp\src\optix\pathescape.cu...
  Compiling CUDA source file ..\instant-ngp\src\optix\raystab.cu...
  Compiling CUDA source file ..\instant-ngp\src\optix\raytrace.cu...

  C:\Users\user1\source\repos\Zielon\INSTA\build>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin\nvcc.exe" --use-local-env -ccbin "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\bin\HostX64\x64" -x cu -IC:\VulkanSDK\1.3.268.0\Include -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\dlss\include" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\glfw\include" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\imgui" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\gl3w" -I"C:\ProgramData\NVIDIA Corporation\OptiX SDK 8.0.0\include" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\tiny-cuda-nn\include" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\tiny-cuda-nn\dependencies" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\tinylogger" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\eigen" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\filesystem" -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\nanovdb" -IC:\Python311\include -I"C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\include" -IC:\Users\user1\source\repos\Zielon\INSTA\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\include" --keep-dir x64\RelWithDebInfo -maxrregcount=0 --machine 64 -ptx -cudart shared -std=c++14 --expt-relaxed-constexpr -Xcompiler="/EHsc -Zi -Ob1" -D_WINDOWS -DNDEBUG -DNGP_VULKAN -DGLFW_INCLUDE_VULKAN -DNGP_GUI -DNGP_OPTIX -D"NGP_VERSION=\"1.0dev\"" -DTCNN_MIN_GPU_ARCH=0 -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -D_MBCS -D"CMAKE_INTDIR=\"RelWithDebInfo\"" -o optix_program.dir\RelWithDebInfo\pathescape.ptx "C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\src\optix\pathescape.cu"

  (the same nvcc invocation follows for raytrace.cu and raystab.cu)

C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common.h(40): fatal error C1083: Cannot open include file: 'fmt/core.h': No such file or directory [C:\Users\user1\source\repos\Zielon\INSTA\build\optix_program.vcxproj]
  raytrace.cu
  pathescape.cu
  raystab.cu
(the same C1083 error is reported for all three .cu files)
C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.1.targets(799,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin\nvcc.exe" ... (the full nvcc invocation above)" exited with code 2. [C:\Users\user1\source\repos\Zielon\INSTA\build\optix_program.vcxproj]
(the same MSB3721 error is reported for pathescape.ptx, raytrace.ptx, and raystab.ptx)
PS C:\Users\user1\source\repos\Zielon\INSTA>
PS C:\Users\user1\source\repos\Zielon\INSTA>

I also checked this guide and its repo, https://github.com/NVlabs/instant-ngp#compilation, and it builds fine.

I'm not sure how includes work in C, but I found this in the file C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/common.h:

...
#pragma once

// A macro is used such that external tools won't end up indenting entire files,
// resulting in wasted horizontal space.
#define TCNN_NAMESPACE_BEGIN namespace tcnn {
#define TCNN_NAMESPACE_END }

#include <tiny-cuda-nn/cpp_api.h>

#include <fmt/core.h>

#include <array>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

#include <cuda_fp16.h>
...

and there is no fmt/core.h around it:

[image]

But I found it here:
C:\Users\user1\source\repos\Zielon\INSTA\instant-ngp\dependencies\tiny-cuda-nn\dependencies\fmt\include\fmt

Please, can you check again whether all versions and commits are correct, since instant-ngp compiles fine for me but INSTA does not?

inference speed

It's great work.
May I ask what the average FPS is for inference?

Could you provide the 3DMM shape coefficient `identity.npy` of each dataset?

Hi, Thanks for sharing the excellent work!

Now I am going to perform face reenactment. The provided datasets contain the 3DMM expression coefficients and camera pose for each frame, but the 3DMM shape coefficient is not provided for each dataset. I used MICA from the README.md to estimate the shape coefficient, but the results are not satisfactory, since there is always a bias relative to the canonical.obj you provided.

Therefore, I am wondering if you could provide the 3DMM shape coefficient identity.npy for each dataset (not for each frame)?
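
For anyone in the same situation, here is a minimal sketch of what such a per-subject shape file could look like, assuming identity.npy simply stores the FLAME shape vector; the 300-dimensional size and the variable names are assumptions, not part of the INSTA release:

import numpy as np

# Hypothetical layout: one FLAME shape vector per subject, saved as identity.npy.
shape_code = np.zeros(300, dtype=np.float32)  # e.g. the shape code estimated by MICA
np.save("identity.npy", shape_code)

identity = np.load("identity.npy")
print(identity.shape)  # (300,)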

Reenactment failure: the number of faces (8090) of the provided canonical.obj is not equal to that (9766) of the mesh from the Metrical Photometric Tracker

Hi, thanks for sharing the excellent work!

I want to reenact the provided avatar with new expressions. So I need to use the Metrical Photometric Tracker to track the FLAME parameters of expression A from a video of another avatar, and then generate the mesh by combining the canonical shape parameters with the expression parameters of expression A.

However, the strange thing is that the number of faces (8090) of the provided canonical mesh is not equal to that (9766) of the output mesh from the Metrical Photometric Tracker. I also found that the number of faces of the output mesh from DECA is 9766 as well. Do you have any ideas about this problem?

Besides, could you please provide the FLAME parameters of each frame and the canonical shape parameters in your dataset? That would make the reenactment easier (no FLAME tracker would be needed).
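
A quick way to confirm such a mismatch is to compare the vertex and face counts of both meshes directly; this sketch assumes trimesh is installed and uses placeholder file names:

import trimesh

# Placeholder paths: the provided canonical mesh vs. a mesh exported by the tracker.
canonical = trimesh.load("canonical.obj", process=False)
tracked = trimesh.load("tracker_output.obj", process=False)

print("canonical:", canonical.vertices.shape[0], "vertices,", canonical.faces.shape[0], "faces")
print("tracked:  ", tracked.vertices.shape[0], "vertices,", tracked.faces.shape[0], "faces")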

Controlling expressions via FLAME

In issue #6 it was mentioned that, for expression transfer, we need new meshes corresponding to the new expressions. How do we obtain the meshes for desired FLAME expressions?

I've seen that length-100 FLAME vectors are used, but I can't get them to affect the expression of the output video.
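
As a rough illustration of the intended workflow, one would decode the length-100 expression vector into a deformed mesh with a FLAME implementation and write it out as an .obj. The decode_flame function below is a placeholder (it just returns the canonical mesh), not INSTA's actual API, and the file layout follows the transforms entries quoted elsewhere in these issues:

import numpy as np
import trimesh

def decode_flame(shape, expression, pose):
    # Placeholder: swap in a real FLAME forward pass (e.g. a FLAME PyTorch implementation).
    # As a trivial stand-in this returns the undeformed canonical mesh.
    canonical = trimesh.load("canonical.obj", process=False)
    return canonical.vertices, canonical.faces

shape = np.load("identity.npy")                 # per-subject shape code (assumed file name)
expression = np.loadtxt("flame/exp/00002.txt")  # length-100 expression vector
pose = np.zeros(6)                              # jaw/neck pose, if the decoder expects it

vertices, faces = decode_flame(shape, expression, pose)
trimesh.Trimesh(vertices=vertices, faces=faces, process=False).export("meshes/00002_new_exp.obj")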

How can I find the GUI and the results?

When I use the demo checkpoint to load the data in WSL Ubuntu:
./build/rta --config insta.json --scene data/obama/transforms_test.json --snapshot data/obama/snapshot.msgpack
no GUI appears. Where are the person's 3D models or other results? Can you help me, please?

Question: how does the model know the head pose?

I have a probably-basic question :)

If I understand how INSTA works correctly, then all the data used for training and inference is in the following files/arrays:

"depth_path": "depth/00002.png",
"exp_path": "flame/exp/00002.txt",
"eyes_path": "flame/eyes/00002.txt",
"file_path": "images/00002.png",
"mesh_path": "meshes/00002.obj",
"seg_mask_path": "seg_mask/00002.png",
"transform_matrix": [...]

My question is: where in this data are the FLAME parameters for the neck and jaw joints? I would think that, if anywhere, they are contained implicitly in the mesh, but when I open the meshes they always seem to be in the same pose. What am I missing here? Thanks!
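
One way to start answering this is to inspect the per-frame transform_matrix directly; a plausible reading is that the rigid head pose is folded into this per-frame transform while the meshes stay in canonical pose, but that interpretation should be checked against the repo. A small sketch, assuming the usual instant-ngp style "frames" list:

import json
import numpy as np

with open("data/obama/transforms_test.json") as f:
    transforms = json.load(f)

frame = transforms["frames"][0]              # assumes the usual instant-ngp "frames" list
M = np.array(frame["transform_matrix"])      # 4x4 homogeneous transform for this frame
R, t = M[:3, :3], M[:3, 3]
print(frame["file_path"])
print("rotation:\n", R)
print("translation:", t)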

Question about Telepresence

Hello, thank you for your great work. I am currently working on building a live avatar system for telepresence. Given the live recorded RGB video and tracked 3D mesh, we create and render the avatar in real time. I think INSTA is very suitable for our scenario. The training time is very slow, and the main problem is that the RGB video and mesh are pre-recorded. Could you give me some suggestions on how to enable real-time scenarios?

I have a few guesses.

  1. We first train a model using a 2-3 minute RGB video and tracked meshes. If the quality loss is acceptable, we then directly use the model on newly generated RGB video and meshes in real time, or we re-train the model on the newly generated data in real time.

  2. Since the 2D video is transmitted in chunks (1 s each), we could train a single model for each video chunk and send the model to the user. But I guess the quality will be greatly reduced, as the training set normally uses 2-3 minutes of video.

Thank you for your time. Any suggestions and comments are truly appreciated.

Bad Results

Any suggestions on how to improve it?

obama drives nf_01

nf_01_obama_2.mp4

duda drives obama

obama_transfer_2.mp4

nf_01 drives nf_01

nf_01_orig_2.mp4

Cannot load a snapshot

Hi,
Since I have only an RTX 3080 Ti, I have tried to load the snapshot, but unfortunately it crashes because it cannot allocate enough memory. None of the CLI options does what I want. The README says: ./build/rta --config insta.json --scene data/obama --height 1024 --width 1024 --snapshot data/obama/snapshot.msgpack

If I provide the --scene option, it tries to load the meshes onto the GPU, which makes it crash. When I omit this option, it wants the --mode option. I tried --mode=nerf, but then it crashes again with "ERROR Uncaught exception: Can't set params because CPU buffer has the wrong size." For --mode=volume it says the network config path insta.json does not exist.

I think something is missing for just testing a rendering example.
Thx
Janusch

Can I use the existing code to do expression transfer?

Thanks for your solution for headless training; it works well.
In the supplemental document, Section B (Applications), the expression transfer results are attractive. Is it possible to do expression transfer with the existing code?

A800 does not have DLSS

CMake Warning at CMakeLists.txt:213 (message):
OptiX was not found. Neural graphics primitives will still compile and run
correctly. However, SDF training in 'raystab' and 'pathescape' modes will
be significantly slower.

Running INSTA on light stage capture data.

Hey @Zielon
Congrats on the great work and thank you for releasing the code.
I am trying to use INSTA on data captured in a light-stage setting. I was able to successfully run MICA on the dataset; however, metrical-tracker does not perform well on the data, for obvious reasons. What would be the best way to adapt INSTA to data captured using a light-stage system? Thank you.

Regards,
Pradyumna.

FullyFusedMLP is not supported

I get the following warning when I run it, but it still works. Will it affect the results?

Warning: FullyFusedMLP is not supported for the selected architecture 70. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.

Rendering the full dataset

Really amazing work! I want to try some image-to-image translation on the final renderings to improve the final quality and add the background back in. To do this, I want to create a paired dataset of real video frames and the corresponding INSTA renderings. But, when I try to render a scene using:

./build/rta --config insta.json --scene data/obama/transforms_train.json --snapshot data/obama/snapshot.msgpack --no-gui --height 512 --width 512

and

"render_novel_trajectory": false,
"render_from_snapshot": true (I've tried false here as well)

Only the test set is rendered. Is it possible to render the full dataset, including the training set?
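
One possible (untested) workaround, assuming the renderer simply draws whatever frames are listed in the transforms file passed to --scene, is to build a combined file that also lists the training frames:

import json

with open("data/obama/transforms_train.json") as f:
    train = json.load(f)
with open("data/obama/transforms_test.json") as f:
    test = json.load(f)

combined = dict(test)                                   # keep the shared metadata from the test file
combined["frames"] = train["frames"] + test["frames"]   # assumes the usual "frames" list

with open("data/obama/transforms_all.json", "w") as f:
    json.dump(combined, f, indent=2)

Then pass data/obama/transforms_all.json to --scene; whether the renderer honors it this way would need to be verified.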

Confusion about the paper

It is a great job!
But there's something counterintuitive.
[image]
Why not use the triangle of the mesh that this sampled ray hits?

How to cut the neck in half?

Your dataset cuts the neck in half in the training set, but the test set keeps the whole neck. I wonder how you cut it?

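For reference, a rough sketch of one way to cut the neck at a fixed image row by editing the segmentation masks before masking the frames; this is an assumption, not the authors' actual procedure, and the cut height is a hypothetical value:

import numpy as np
from PIL import Image

mask = np.array(Image.open("seg_mask/00002.png"))
cut_row = int(mask.shape[0] * 0.85)   # hypothetical cut height, tune per subject
mask[cut_row:] = 0                    # discard everything below the cut
Image.fromarray(mask).save("seg_mask/00002_cut.png")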

Availability of the "yufeng" dataset in the same format?

Hi, zielon, many thanks for your reply! I fully understand that we need everyone to agree on releasing their data.

In that case, could you please provide the processed data for "yufeng" from IMavatar? I noticed that you also trained on this subject in your paper, but its dataset is not aligned with your format.

Looking forward to your reply! Many Thanks!

Best,
Xiangyue

Create data with shoulders

Hi,

I made the data with just the head and neck and it ran successfully, but how do I go further and create data that includes the shoulders? Do I only need to modify the mask in postprocess.py?

best,
chieh
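
For reference, a hedged sketch of extending the foreground mask to keep the shoulders, assuming the masks come from face-parsing.PyTorch and that its label PNG stores per-pixel class indices; the specific indices used here (skin, neck, cloth, hair) are assumptions that should be checked against the parser's class list:

import numpy as np
from PIL import Image

parsing = np.array(Image.open("parsing/00002.png"))   # per-pixel class labels (assumed layout)
keep = [1, 14, 16, 17]                                # skin, neck, cloth, hair -- indices are assumptions
mask = np.isin(parsing, keep).astype(np.uint8) * 255
Image.fromarray(mask).save("seg_mask/00002.png")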

Expression transfer

Do you know how to use one person's facial expressions to drive another person's model?

Could you please release a test dataset with the same neck as the training dataset?

Wonderful work and nice dataset!
But I find that the neck in the test dataset is different from the training dataset, which leads to bad metric results since the model is not trained with the larger neck.
So could you please tell me how to get the same neck as in the training dataset, or release a test dataset that has the same neck as the training dataset?
Thanks a lot!

Can I load the checkpoint to keep training?

I have finished training the model, but I want to train for more steps (for example, the original run used max_steps=45). When I use --config insta.json --scene data/obama/transforms_test.json --snapshot data/obama/snapshot.msgpack to continue training, and I also increase max_steps (max_steps=100), it does not seem to train; the iteration stays at the same number:
08:37:50 INFO iteration=46 loss=0.0202575
08:37:50 INFO iteration=46 loss=0.0202575
08:37:50 INFO iteration=46 loss=0.0202575
08:37:50 INFO iteration=46 loss=0.0202575
08:37:50 INFO iteration=46 loss=0.0202575
....
and it never stops.

About training INSTA

Hello,
I am trying to train an avatar using INSTA. I disable the GUI option using --no-gui since there is no GUI on my machine. I wonder how to set the number of training steps and how to visualize the results?

Creating own dataset issue

When creating my own dataset, I am getting an error in the simplification.py file:

ValueError: shape mismatch: value array of shape (4065,3) could not be broadcast to indexing result of shape (5023,3)

This occurs at this line.

Any ideas on how to fix it?
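
The error message suggests a (4065, 3) vertex array being written into an index selection that expects (5023, 3); 5023 is the vertex count of the full FLAME template, so one thing to verify is that the mesh fed to simplification.py is the full-resolution FLAME mesh rather than an already-simplified one. A minimal reproduction of the same NumPy error:

import numpy as np

full = np.zeros((5023, 3))        # buffer sized for the full FLAME template (5023 vertices)
simplified = np.zeros((4065, 3))  # an already-simplified mesh

try:
    full[np.arange(5023)] = simplified   # raises the same shape-mismatch ValueError
except ValueError as err:
    print(err)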

Getting loss = nan after some iterations

Which parameters can I change to avoid this? Any suggested parameter values?

16:35:16 INFO iteration=858 loss=0.004551
16:35:16 INFO iteration=859 loss=0.004551
16:35:16 INFO iteration=860 loss=0.004551
16:35:16 INFO iteration=861 loss=0.004551
16:35:16 INFO iteration=862 loss=0.004551
16:35:16 INFO iteration=863 loss=0.004551
16:35:16 INFO iteration=864 loss=0.004551
16:35:16 INFO iteration=865 loss=nan
16:35:16 INFO iteration=866 loss=nan
16:35:16 INFO iteration=867 loss=nan
16:35:16 INFO iteration=868 loss=nan
16:35:16 INFO iteration=869 loss=nan
16:35:16 INFO iteration=870 loss=nan
16:35:16 INFO iteration=871 loss=nan
16:35:16 INFO iteration=872 loss=nan
