jittor / jnerf

JNeRF is a NeRF benchmark based on Jittor. JNeRF re-implements instant-ngp and achieves the same performance as the original paper.

License: Apache License 2.0

Python 2.50% C++ 81.98% C 3.03% CMake 1.66% Shell 0.12% HTML 0.13% Cuda 1.34% Fortran 9.09% XSLT 0.05% JavaScript 0.05% CSS 0.04% Dockerfile 0.01%

jnerf's Introduction

Jittor: a Just-in-time (JIT) deep learning framework


Quickstart | Install | Tutorial | 简体中文

Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. The whole framework and the meta-operators are compiled just-in-time. A powerful op compiler and tuner are integrated into Jittor, which allows it to generate high-performance code specialized for your model. Jittor also contains a wealth of high-performance model libraries, including image recognition, detection, segmentation, generation, differentiable rendering, geometric learning, reinforcement learning, and more.
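To give a flavor of the meta-operator design, a dense matrix multiplication can be composed from broadcast and reduce operators. The snippet below is a minimal sketch of that pattern, assuming Var.broadcast(shape, dims=...) inserts the listed new dimensions and Var.sum(dim) reduces over a dimension.

import jittor as jt

def matmul(a, b):
    # compose matmul from broadcast and reduce meta-operators:
    # a: [n, m], b: [m, k] -> result: [n, k]
    (n, m), k = a.shape, b.shape[-1]
    a = a.broadcast([n, m, k], dims=[2])  # repeat a along a new last axis
    b = b.broadcast([n, m, k], dims=[0])  # repeat b along a new first axis
    return (a * b).sum(1)                 # reduce over the shared m axis

x = jt.random([4, 5])
y = jt.random([5, 3])
print(matmul(x, y).shape)  # expected shape: [4, 3]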

The front-end language is Python. Module design and dynamic graph execution are used in the front end, which is the most popular design for deep learning framework interfaces. The back-end is implemented in high-performance languages such as CUDA and C++.


The following example shows how to model a two-layer neural network step by step and train it from scratch in a few lines of Python code.

import jittor as jt
from jittor import Module
from jittor import nn
import numpy as np

class Model(Module):
    def __init__(self):
        self.layer1 = nn.Linear(1, 10)
        self.relu = nn.Relu() 
        self.layer2 = nn.Linear(10, 1)
    def execute(self, x):
        x = self.layer1(x)
        x = self.relu(x)
        x = self.layer2(x)
        return x

def get_data(n): # generate random data for training test.
    for i in range(n):
        x = np.random.rand(batch_size, 1)
        y = x*x
        yield jt.float32(x), jt.float32(y)


learning_rate = 0.1
batch_size = 50
n = 1000

model = Model()
optim = nn.SGD(model.parameters(), learning_rate)

for i,(x,y) in enumerate(get_data(n)):
    pred_y = model(x)
    dy = pred_y - y
    loss = dy * dy
    loss_mean = loss.mean()
    optim.step(loss_mean)
    print(f"step {i}, loss = {loss_mean.data.sum()}")


Quickstart

We provide some Jupyter notebooks to help you get started with Jittor quickly.

Install

Jittor environment requirements:

OS | CPU | Python | Compiler | (Optional) GPU platform
--- | --- | --- | --- | ---
Linux (Ubuntu, CentOS, Arch, UOS, KylinOS, ...) | x86, x86_64, ARM, loongson | >= 3.7 | g++ >= 5.4 | Nvidia CUDA >= 10.0 with cuDNN, or AMD ROCm >= 4.0, or Hygon DCU DTK >= 22.04
macOS (>= 10.14 Mojave) | intel, Apple Silicon | >= 3.7 | clang >= 8.0 | -
Windows 10 & 11 | x86_64 | >= 3.8 | - | Nvidia CUDA >= 10.2 with cuDNN

Jittor offers three ways to install: pip, docker, or manual.

Pip install

sudo apt install python3.7-dev libomp-dev
python3.7 -m pip install jittor
# or install from github(latest version)
# python3.7 -m pip install git+https://github.com/Jittor/jittor.git
python3.7 -m jittor.test.test_example

macOS install

Please first install additional dependencies with homebrew.

brew install libomp

Then you can install jittor through pip and run the example.

python3.7 -m pip install jittor
python3.7 -m jittor.test.test_example

Currently jittor only supports CPU on macOS.

Windows install

# check your python version(>=3.8)
python --version
python -m pip install jittor
# if conda is used
conda install pywin32

On Windows, Jittor will automatically detect and install CUDA; please make sure your NVIDIA driver supports CUDA 10.2 or above. Alternatively, you can let Jittor install CUDA for you manually:

python -m jittor_utils.install_cuda

Docker Install

We provide a Docker installation method to save you from configuring the environment yourself. Use it as follows:

# CPU only(Linux)
docker run -it --network host jittor/jittor
# CPU and CUDA(Linux)
docker run -it --network host --gpus all jittor/jittor-cuda
# CPU only(Mac and Windows)
docker run -it -p 8888:8888 jittor/jittor

Manual install

We will show how to install Jittor on Ubuntu 16.04 step by step; other Linux distributions may use similar commands.

Step 1: Choose your back-end compiler

# g++
sudo apt install g++ build-essential libomp-dev

# OR clang++-8
wget -O - https://raw.githubusercontent.com/Jittor/jittor/master/script/install_llvm.sh > /tmp/llvm.sh
bash /tmp/llvm.sh 8

Step 2: Install Python and python-dev

Jittor needs Python version >= 3.7.

sudo apt install python3.7 python3.7-dev

Step 3: Run Jittor

The whole framework is compiled just-in-time. Let's install Jittor via pip:

git clone https://github.com/Jittor/jittor.git
sudo pip3.7 install ./jittor
export cc_path="clang++-8"
# if other compiler is used, change cc_path
# export cc_path="g++"
# export cc_path="icc"

# run a simple test
python3.7 -m jittor.test.test_example

If the test passes, your Jittor is ready.

Optional Step 4: Enable CUDA

Using CUDA in Jittor is very simple; just set the environment variable nvcc_path:

# replace this var with your nvcc location 
export nvcc_path="/usr/local/cuda/bin/nvcc" 
# run a simple cuda test
python3.7 -m jittor.test.test_cuda 

If the test passes, you can use Jittor with CUDA by setting the use_cuda flag.

import jittor as jt
jt.flags.use_cuda = 1
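As a quick sanity check that the GPU is actually being used, a small computation can be run with the flag enabled. This is a minimal sketch and assumes a working CUDA installation.

import jittor as jt
jt.flags.use_cuda = 1          # run subsequent operators on the GPU
x = jt.random([1000, 1000])    # random matrix created on the device
y = jt.matmul(x, x).sum()
print(y.data)                  # .data synchronizes and copies the result back to the host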

Optional Step 5: Test Resnet18 training

To check the integrity of Jittor, you can run the ResNet-18 training test. Note: this test requires 6 GB of GPU RAM.

python3.7 -m jittor.test.test_resnet

If those tests fail, please report a bug to us, and feel free to contribute ^_^

Tutorial

In the tutorial section, we will briefly explain the basic concepts of Jittor.

To train your model with Jittor, there are only two main concepts you need to know:

  • Var: the basic data type of Jittor
  • Operations: Jittor's ops are similar to numpy's

Var

First, let's get started with Var. Var is the basic data type of Jittor. Computation in Jittor is asynchronous for optimization purposes. If you want to access the data, Var.data can be used for synchronous data access.

import jittor as jt
a = jt.float32([1,2,3])
print (a)
print (a.data)
# Output: float32[3,]
# Output: [ 1. 2. 3.]
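Var also interoperates with numpy in both directions. The snippet below is a minimal sketch, assuming the commonly used Var.numpy() accessor.

import numpy as np
arr = jt.array(np.arange(6, dtype="float32").reshape(2, 3))  # numpy array -> Var
print(arr.numpy())  # Var -> numpy array (synchronizes, equivalent to .data)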

And we can give the variable a name.

a.name('a')
print(a.name())
# Output: a

Operations

Jittor's ops are similar to numpy's. Let's try some operations. We create Var a and b via the operation jt.float32, and multiply them. Printing those variables shows that they have the same shape and dtype.

import jittor as jt
a = jt.float32([1,2,3])
b = jt.float32([4,5,6])
c = a*b
print(a,b,c)
print(type(a), type(b), type(c))
# Output: float32[3,] float32[3,] float32[3,]
# Output: <class 'jittor_core.Var'> <class 'jittor_core.Var'> <class 'jittor_core.Var'>

Besides that, all the operators of the form jt.xxx(Var, ...) have an alias Var.xxx(...). For example:

c.max() # alias of jt.max(c)
c.add(a) # alias of jt.add(c, a)
c.min(keepdims=True) # alias of jt.min(c, keepdims=True)

If you want to know all the operations Jittor supports, try help(jt.ops). All the operations found in jt.ops.xxx can also be used via the alias jt.xxx.

help(jt.ops)
# Output:
#   abs(x: core.Var) -> core.Var
#   add(x: core.Var, y: core.Var) -> core.Var
#   array(data: array) -> core.Var
#   binary(x: core.Var, y: core.Var, op: str) -> core.Var
#   ......
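For instance, the following three calls invoke the same underlying operator (a small illustrative snippet reusing the Var a defined above):

print(jt.ops.add(a, a).data)  # fully qualified form
print(jt.add(a, a).data)      # top-level alias
print(a.add(a).data)          # method-style alias on Var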

More

If you want to know more about Jittor, please check out the notebooks below:

Those notebooks can be started on your own computer with python3.7 -m jittor.notebook.

Contributing

Jittor is still young. It may contain bugs and issues; please report them in our bug tracking system. Contributions are welcome. Besides, if you have any ideas about Jittor, please let us know.

You can help Jittor in the following ways:

  • Citing Jittor in your paper
  • Recommending Jittor to your friends
  • Contributing code
  • Contributing tutorials and documentation
  • Filing an issue
  • Answering Jittor-related questions
  • Lighting up the stars
  • Keeping an eye on Jittor
  • ......

Contact Us

Website: http://cg.cs.tsinghua.edu.cn/jittor/

Email: [email protected]

File an issue: https://github.com/Jittor/jittor/issues

QQ Group: 836860279

The Team

Jittor is currently maintained by the Tsinghua CSCG Group. If you are also interested in Jittor and want to improve it, please join us!

Citation

@article{hu2020jittor,
  title={Jittor: a novel deep learning framework with meta-operators and unified graph execution},
  author={Hu, Shi-Min and Liang, Dun and Yang, Guo-Ye and Yang, Guo-Wei and Zhou, Wen-Yang},
  journal={Science China Information Sciences},
  volume={63},
  number={222103},
  pages={1--21},
  year={2020}
}

License

Jittor is Apache 2.0 licensed, as found in the LICENSE.txt file.

jnerf's People

Contributors

cjld, exusial, fredzel2020, gword, iamncj, ldyang694, ming4586, sleeplessai, yannnnnnnnnnnn


jnerf's Issues

TypeError: argument of type 'NoneType' is not iterable

Loading config from: ./projects/ngp/configs/ngp_base.py
load train data
Traceback (most recent call last):
File "/home/featurize/JNeRF/python/jnerf/utils/registry.py", line 33, in build_from_cfg
module = obj_cls(**args)
File "/home/featurize/JNeRF/python/jnerf/dataset/dataset.py", line 111, in init
self.load_data()
File "/home/featurize/JNeRF/python/jnerf/dataset/dataset.py", line 150, in load_data
if 'h' in json_data:
TypeError: argument of type 'NoneType' is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/featurize/JNeRF/tools/run_net.py", line 54, in
main()
File "/home/featurize/JNeRF/tools/run_net.py", line 44, in main
runner = Runner()
File "/home/featurize/JNeRF/python/jnerf/runner/runner.py", line 24, in init
self.dataset["train"] = build_from_cfg(self.cfg.dataset.train, DATASETS)
File "/home/featurize/JNeRF/python/jnerf/utils/registry.py", line 37, in build_from_cfg
raise TypeError(e)
TypeError: <class 'jnerf.dataset.dataset.NerfDataset'>.argument of type 'NoneType' is not iterable

could you help me?

question about debug

Hi, all,

The main entry point run_net.py wraps Runner() in a thread, which makes debugging (e.g. with PyCharm) difficult. Is it possible to bypass the threading approach, and how could we modify the code to do so?

Thanks~

A problem with importing jnerf

Hi, I have already installed jittor and can run the example, but when I import jnerf I get the following error:
(/cloud/conda) ➜ work python
Python 3.9.12 (main, Jun 1 2022, 11:38:51)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

import jnerf
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'jnerf'
How can I solve this?

Failed to import on M1 Max

(jnerf) ➜  python git:(master) python                    
Python 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:05:16) 
[Clang 12.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import jnerf
[i 0813 22:32:46.508897 04 lock.py:85] Create lock file:/Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/jittor.lock
[i 0813 22:32:46.521387 04 compiler.py:955] Jittor(1.3.5.7) src: /opt/homebrew/Caskroom/miniforge/base/envs/jnerf/lib/python3.8/site-packages/jittor
[i 0813 22:32:46.543845 04 compiler.py:956] g++ at /usr/bin/g++(13.1.6)
[i 0813 22:32:46.543987 04 compiler.py:957] cache_path: /Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/stable
[i 0813 22:32:46.909862 04 compiler.py:34] Create cache dir: /Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/stable/jit
[i 0813 22:32:46.910039 04 compiler.py:34] Create cache dir: /Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/stable/obj_files
[i 0813 22:32:46.910114 04 compiler.py:34] Create cache dir: /Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/stable/gen
[i 0813 22:32:46.910180 04 compiler.py:34] Create cache dir: /Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/stable/tmp
[i 0813 22:32:46.910262 04 compiler.py:34] Create cache dir: /Users/me/.cache/jittor/jt1.3.5/g++13.1.6/py3.8.13/macOS-12.0.1-ax07/AppleM1Max/stable/checkpoints
[i 0813 22:32:59.389819 00 __init__.py:227] Total mem: 64.00GB, using 16 procs for compiling.
Compiling jittor_core(146/146) used: 27.826s eta: 0.000s
[i 0813 22:33:27.992004 00 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
[i 0813 22:33:29.038271 00 compile_extern.py:517] mpicc not found, distribution disabled.
[w 0813 22:33:29.038413 00 compile_extern.py:589] MKL install failed, msg:Not found onednn, please install it by the command 'brew install onednn'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/me/source/py/JNeRF/python/jnerf/__init__.py", line 2, in <module>
    from . import models
  File "/Users/me/source/py/JNeRF/python/jnerf/models/__init__.py", line 1, in <module>
    from . import networks
  File "/Users/me/source/py/JNeRF/python/jnerf/models/networks/__init__.py", line 1, in <module>
    from . import ngp_network
  File "/Users/me/source/py/JNeRF/python/jnerf/models/networks/ngp_network.py", line 7, in <module>
    from jnerf.ops.code_ops.fully_fused_mlp import FullyFusedMlp_weight
  File "/Users/me/source/py/JNeRF/python/jnerf/ops/code_ops/fully_fused_mlp.py", line 6, in <module>
    from jnerf.ops.code_ops.global_vars import global_headers, proj_options
  File "/Users/me/source/py/JNeRF/python/jnerf/ops/code_ops/global_vars.py", line 3, in <module>
    jt.flags.use_cuda = 1
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.Flags.use_cuda)).

Types of your inputs are:
 self	= Flags,
 arg	= int,

The function declarations are:
 void _set_use_cuda(int v)
 void _set_use_cuda(bool v)

Failed reason:[f 0813 22:33:33.119686 00 cuda_flags.cc:37] Check failed: value==0  No CUDA found.

Error at runtime

Hello,
I ran into this problem when running the example code. How should I solve it? Thanks.
[two screenshots of the error attached]

Data split on NeRF_synthetic

On training, these two lines confuse me.

load train data
100%|██████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:03<00:00, 63.65it/s]
load val data
100%|████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00,  7.09it/s]

Why do you have 200 training images and 10 val images? The original data has 100 training and 100 validation images. Which set do you use to get the reported 36.41 PSNR? The original paper uses only the 100 training images, not the 100 training + 100 validation images.

Wrong mesh model in some cases

For example, on drums: the rendered video and origin.ply look fine, but color.ply is wrong.
Similar errors occur on materials & ficus.

[two screenshots attached]

FFMLP (fp16) not compatible with Jittor=13.8.5

Following the instructions in ReadMe.md, I got this when running the ngp_fox test:

/hy-tmp/JNeRF-master/python/jnerf/runner/runner.py:193: RuntimeWarning: invalid value encountered in cast
  ndarr = (img*255+0.5).clip(0, 255).astype('uint8')

The output image in the log folder is all black.
Some of the loss values are NaN.

my hardware environment:
Intel(R) Xeon(R) CPU E5-2686 v4
RTX3090-24G

The training and output are all correct when I set FP_16 = False in the config file, which makes the code use the MLP from pytorch.nn instead of FMLP.

I fixed it by downgrading my Python to 3.8 and jittor to 1.3.4.13 (exactly the version in requirements.txt), without running python setup.py, and everything works fine.

If you encounter the same problem, try:

  1. downgrade your python to 3.8 and jittor to 1.3.4.13
  2. do not run python setup.py again, because it will automatically upgrade your jittor package to the latest version.

I hope you guys fix it ASAP.

How can I join to contribute code?

JNeRF is great work! It can be a new benchmark in the neural rendering community.

The readme gives a development plan. So how can others join in to co-develop?

Much thanks!

Setting camera parameters

Hello, how should the camera parameters be set? Could you provide an article on parameter tuning? Thanks.

Work on WSL

Hello, I ran the ngp_base.py code on WSL2 and got the following error. How should I handle it? Thanks.
[screenshot of the error attached]

When I start training I meet this problem

RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.sync)).

Types of your inputs are:
self = Var,
args = (),

The function declarations are:
void sync(bool device_sync = false, bool weak_sync = true)

Failed reason:[f 1119 23:18:20.776426 96 op.cc:182] Check failed: flags.get(NodeFlags::_cpu) Something wrong... Could you please report this issue?

Can we use NeRF with the LLFF dataset?

How can we use the LLFF dataset, as the original NeRF does, to get some results? I wonder if there are interfaces for LLFF input, since I'm not sure whether LLFF poses can be converted to the transforms_train format.

fatal error LNK1120: 1 unresolved externals

System: Windows 10
Command: python jnerf/tools/run_net.py --config-file configs/jnerf/ngp_llff.py
Environment: Python 3.8.13

Files in LLFF format can be trained normally in nerf-pytorch, but compilation fails in jnerf.

The detailed error message is as follows:

[two screenshots attached]

Loading config from: configs/jnerf/ngp_llff.py
HOLDOUT view is 12
Auto LLFF holdout, 8
create data/fern\split.json
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 103.56it/s]
HOLDOUT view is 12
Auto LLFF holdout, 8
0%| | 0/3 [00:00<?, ?it/s]
Compiling Operators(11/11) used: 48.3s eta: 0s
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:48<00:00, 16.17s/it]
Warning:Default max value of NeRF dataset's aabb_scale is 16, but now is 64.
You can increase this max_aabb_scale limit by factors of 2 by incrementing NERF_CASCADES. We automatically help you set NERF_CASCADES to 7, which may result in slower speeds.
0%| | 0/40000 [00:00<?, ?it/s]
Compiling Operators(40/40) used: 163s eta: 0s
[e 1010 18:34:16.263000 12 log.cc:565] cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_op.cc
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_op.cc
code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_op.cc
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
cl : Command line warning D9025 : overriding '/EHa' with '/EHs'
Creating library C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67\jit\code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____has
h_f79e8ce28b3203ea_op.lib and object C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67\jit\code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SI
ZE_2__out0____hash_f79e8ce28b3203ea_op.exp
tmpxft_00006864_00000000-19_code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_op.obj : error LNK2019: unresolved external symbol "struct pcg32 jittor::rng" (?rng@j
ittor@@3Upcg32@@A) referenced in function "public: void __cdecl jittor::CodeOp::jit_run(void)" (?jit_run@CodeOp@jittor@@QEAAXXZ)
C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67\jit\code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_o
p.dll : fatal error LNK1120: 1 unresolved externals

0%| | 0/40000 [02:49<?, ?it/s]
Traceback (most recent call last):
File "jnerf/tools/run_net.py", line 75, in
main()
File "jnerf/tools/run_net.py", line 66, in main
runner.train()
File "c:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\runner\runner.py", line 70, in train
pos, dir = self.sampler.sample(img_ids, rays_o, rays_d, is_training=True)
File "c:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\samplers\density_grid_sampler\density_grid_sampler.py", line 139, in sample
self.update_density_grid()
File "c:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\samplers\density_grid_sampler\density_grid_sampler.py", line 259, in update_density_grid
self.update_density_grid_nerf(
File "c:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\samplers\density_grid_sampler\density_grid_sampler.py", line 219, in update_density_grid_nerf
density_grid_positions_uniform, density_grid_indices_uniform = self.generate_grid_samples_nerf_nonuniform.execute(
File "c:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\samplers\density_grid_sampler\generate_grid_samples_nerf_nonuniform.py", line 50, in execute
output[0].sync()
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.sync)).

Types of your inputs are:
self = Var,
args = (),

The function declarations are:
void sync(bool device_sync = false, bool weak_sync = true)

Failed reason:[f 1010 18:34:16.266000 12 parallel_compiler.cc:329] Error happend during compilation:
[Error] source file location:C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67\jit\code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__o
ut0____hash_f79e8ce28b3203ea_op.cc
Compile operator(0/1)failed:Op(833:1:2:2:i2:o2:s0,code->...)

Reason: [f 1010 18:34:16.264000 12 log.cc:608] Check failed ret(2) == 0(0) Run cmd failed: "C:\Users\Shijunfeng.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin\nvcc.exe" "C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Window
s-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67\jit\code__IN_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_op.cc" -shared -L"c:\users\shijunfeng\anaconda3\e
nvs\pytorch\libs" -lpython38 -Xcompiler -EHa -Xcompiler -MD -Xcompiler -utf-8 -I"C:\Users\Shijunfeng.cache\jittor\msvc\VC\include" -I"C:\Users\Shijunfeng.cache\jittor\msvc\win10_kits\include\ucrt" -I"C:\Users\Shijunfeng.ca
che\jittor\msvc\win10_kits\include\shared" -I"C:\Users\Shijunfeng.cache\jittor\msvc\win10_kits\include\um" -DNOMINMAX -L"C:\Users\Shijunfeng.cache\jittor\msvc\VC\lib" -L"C:\Users\Shijunfeng.cache\jittor\msvc\win10_kits\lib\u
m\x64" -L"C:\Users\Shijunfeng.cache\jittor\msvc\win10_kits\lib\ucrt\x64" -I"c:\users\shijunfeng\anaconda3\envs\pytorch\lib\site-packages\jittor\src" -I"c:\users\shijunfeng\anaconda3\envs\pytorch\include" -DHAS_CUDA -DIS_CUDA -
I"C:\Users\Shijunfeng.cache\jittor\jtcuda\cuda11.2_cudnn8_win\include" -I"c:\users\shijunfeng\anaconda3\envs\pytorch\lib\site-packages\jittor\extern\cuda\inc" -lcudart -L"C:\Users\Shijunfeng.cache\jittor\jtcuda\cuda11.2_cudnn
8_win\lib\x64" -L"C:\Users\Shijunfeng.cache\jittor\jtcuda\cuda11.2_cudnn8_win\bin" -I"C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67" -L"C:\Users\Shijunfeng.cache\j
ittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67" -L"C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default" -l"jit_utils_core.cp38-win_amd64" -l"jittor
_core.cp38-win_amd64" -x cu --cudart=shared -ccbin="C:\Users\Shijunfeng.cache\jittor\msvc\VC_____\bin\cl.exe" --use_fast_math -w -I"c:\users\shijunfeng\anaconda3\envs\pytorch\lib\site-packages\jittor\extern/cuda/inc" -
arch=compute_75 -code=sm_75 -Ic:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\position_encoders\hash_encoder\op_header -Ic:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\position_encoders\sh_e
ncoder\op_header -Ic:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\models\samplers\density_grid_sampler\op_header -Ic:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\ops\code_ops..\op_include/eigen -Ic:\user
s\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\ops\code_ops..\op_include/include -Ic:\users\shijunfeng\desktop\3drebuild\jnerf\python\jnerf\ops\code_ops..\op_include/pcg32 -Ic:\users\shijunfeng\desktop\3drebuild\jnerf\pytho
n\jnerf\ops\code_ops..\op_include/../op_header -DGLOBAL_VAR --extended-lambda --expt-relaxed-constexpr -o "C:\Users\Shijunfeng.cache\jittor\jt1.3.5\cl\py3.8.13\Windows-10-10.xf7\AMDRyzen73700Xx25\default\cu11.2.67\jit\code__I
N_SIZE_2__in0_dim_1__in0_type_float32__in1_dim_1__in1_type_int32__OUT_SIZE_2__out0____hash_f79e8ce28b3203ea_op.dll" -Xlinker -EXPORT:"?jit_run@CodeOp@jittor@@QEAAXXZ"


About mesh generation failure

[Open3D WARNING] Read PLY failed: number of vertex <= 0.
Traceback (most recent call last):
File "tools/extract_mesh.py", line 161, in
mesh()
File "tools/extract_mesh.py", line 94, in mesh
max_cluster_idx = np.argmax(count)
File "<array_function internals>", line 6, in argmax
File "/home/ysz/anaconda3/envs/jnerf/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 1195, in argmax
return _wrapfunc(a, 'argmax', axis=axis, out=out)
File "/home/ysz/anaconda3/envs/jnerf/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 54, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "/home/ysz/anaconda3/envs/jnerf/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 43, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
ValueError: attempt to get argmax of an empty sequence

Running extract_mesh.py following the official example produces the above problem. What causes it? Is it the open3d version?

failed to run on the fox dataset

Running the code on the fox dataset fails because no val dataset is defined.
Any idea of a solution for this?
Thanks

About the .o files

I noticed that there are two .o files, fully_fused_mlp_function.o and calc_rgb.o, under the python/jnerf/ops/code_ops/op_header folder. Will the source code of these two functions be open-sourced? If so, when?

add roadmap

Can you add a roadmap like this? Then we can know the detailed plan and schedule.

extract_mesh.py segfault

What causes it?
python tools/extract_mesh.py --config-file ./projects/ngp/configs/ngp_base.py
[i 1211 23:31:46.072899 28 compiler.py:955] Jittor(1.3.6.3) src: /home/rqq/.local/lib/python3.7/site-packages/jittor
[i 1211 23:31:46.084297 28 compiler.py:956] g++ at /usr/bin/g++(5.4.0)
[i 1211 23:31:46.084554 28 compiler.py:957] cache_path: /home/rqq/.cache/jittor/jt1.3.6/g++5.4.0/py3.7.15/Linux-4.15.0-1x3d/IntelRXeonRSilxe9/default
[i 1211 23:31:50.941330 28 install_cuda.py:93] cuda_driver_version: [10, 2]
[i 1211 23:31:50.942246 28 install_cuda.py:81] restart /data/rqq/envs/ngp/bin/python ['tools/extract_mesh.py', '--config-file', './projects/ngp/configs/ngp_base.py']
[i 1211 23:31:51.968684 76 compiler.py:955] Jittor(1.3.6.3) src: /home/rqq/.local/lib/python3.7/site-packages/jittor
[i 1211 23:31:51.979015 76 compiler.py:956] g++ at /usr/bin/g++(5.4.0)
[i 1211 23:31:51.979376 76 compiler.py:957] cache_path: /home/rqq/.cache/jittor/jt1.3.6/g++5.4.0/py3.7.15/Linux-4.15.0-1x3d/IntelRXeonRSilxe9/default
[i 1211 23:31:57.551777 76 install_cuda.py:93] cuda_driver_version: [10, 2]
[i 1211 23:31:57.562057 76 init.py:411] Found /home/rqq/.cache/jittor/jtcuda/cuda10.2_cudnn7_linux/bin/nvcc(10.2.89) at /home/rqq/.cache/jittor/jtcuda/cuda10.2_cudnn7_linux/bin/nvcc.
[i 1211 23:31:57.638187 76 init.py:411] Found gdb(7.11.1) at /usr/bin/gdb.
[i 1211 23:31:57.650179 76 init.py:411] Found addr2line(2.26.1) at /usr/bin/addr2line.
[i 1211 23:32:05.867694 76 compiler.py:1010] cuda key:cu10.2.89_sm_70
Caught segfault at address 0, thread_name: '', flush log...
[bt] Execution path:
[bt] #1 /home/rqq/.cache/jittor/jt1.3.6/g++5.4.0/py3.7.15/Linux-4.15.0-1x3d/IntelRXeonRSilxe9/default/jit_utils_core.cpython-37m-x86_64-linux-gnu.so(ZN6jittor18segfault_sigactionEiP9siginfo_tPv+0xa24) [0x7fd97342a294]
[bt] #2 /lib/x86_64-linux-gnu/libc.so.6(+0x354c0) [0x7fd97f8404c0]
[bt] #3 /lib/x86_64-linux-gnu/libc.so.6(cfree+0x22) [0x7fd97f88f562]
[bt] #4 /data/rqq/envs/ngp/lib/python3.7/site-packages/open3d/open3d.cpython-37m-x86_64-linux-gnu.so(inflateReset2+0x5f) [0x7fd97ee0cbdf]
[bt] #5 /data/rqq/envs/ngp/bin/../lib/libz.so.1(inflateInit2
+0x88) [0x7fd97f2d4a98]
[bt] #6 /data/rqq/envs/ngp/lib/python3.7/lib-dynload/zlib.cpython-37m-x86_64-linux-gnu.so(+0x366f) [0x7fd976d9266f]
[bt] #7 /data/rqq/envs/ngp/bin/python(_PyMethodDef_RawFastCallDict+0x169) [0x4bb779]
[bt] #8 /data/rqq/envs/ngp/bin/python(PyCFunction_Call+0x3a) [0x4d3cca]
[bt] #9 /data/rqq/envs/ngp/bin/python(_PyEval_EvalFrameDefault+0x54fe) [0x4b7b6e]
[bt] #10 /data/rqq/envs/ngp/bin/python(_PyEval_EvalCodeWithName+0x201) [0x4b1411]
[bt] #11 /data/rqq/envs/ngp/bin/python(_PyFunction_FastCallKeywords+0x29c) [0x4c50ec]
[bt] #12 /data/rqq/envs/ngp/bin/python() [0x4ba23f]
[bt] #13 /data/rqq/envs/ngp/bin/python(_PyEval_EvalFrameDefault+0x15d1) [0x4b3c41]
[bt] #14 /data/rqq/envs/ngp/bin/python(_PyEval_EvalCodeWithName+0x201) [0x4b1411]
[bt] #15 /data/rqq/envs/ngp/bin/python(_PyFunction_FastCallDict+0x2d7) [0x4cbc37]
Segfault, exit

code fails on aabb_scale = 16

Hi, I'm running the code with
aabb_scale = 16 and
"w": 960.0,
"h": 540.0,
and the code fails with the following message:

[screenshot of the error attached]

Running on an NVIDIA GeForce RTX 2080 Ti.

Thanks for your help.

Are these features available in JNeRF? Can they be implemented together?

1) Reduce the mandatory camera-pose and similar input parameters, and handle them inside the network (perhaps by adding a sub-network).
2) When reconstructing/generating, can images or other landmark points be added as conditions? (For example, when generating a cluster of flowers with a stick inserted, the flowers should not penetrate the stick.)
3) Can the speed-related optimizations of instant-ngp, Plenoxels, etc. be merged into a single model?
4) Can models that represent dynamic objects, such as msff, also be merged in, so that at construction time a conditional reference image, a time step, a random seed, etc. can be given to produce time-dependent dynamics?

Runtime error: Execute fused operator(1/2) failed

First of all, thank you for your work on JNeRF! @Gword
We have recently done further development based on your JNeRF implementation, and at runtime the code raises an "Execute fused operator failed" error. The key error message is shown below:
[screenshot attached]
More specifically, we found that the self.sampler.sample call fails if and only if the render_test function of the JNeRF implementation is called to render test images. Running the sampling operation elsewhere (e.g. during NeRF training) works without any problem.
The error message is very vague, just "found something wrong", so we have no idea where to start when trying to fix it.
The error message also says that "pcg32.h" was not found; we tried adding that header to the relevant path, but the error was not resolved, and the message then said that "ray_sampler.h" was not included.

Could the JNeRF team assess where the problem might be and give us some advice? Thanks!

positions sampled do not match aabb_scale

Hi there! When I visualize the 'pos' sampled by the density_grid_sampler in line 70 in runner.py, I found that whatever aabb_scale is, pos ranges in [0, 1]. And when I set 'const_dt=True', pos ranges even smaller when aabb_scale is larger! It seems that there is something wrong with the aabb_scale or the density_grid_sampler. @Gword

Here are some pictures. The yellow cube is a unit cube. The white points are rays_o. The red/black points are pos.
[figure: sampled pos, aabb_scale=1]
[figure: sampled pos, aabb_scale=2]
[figure: sampled pos, aabb_scale=16]
[figure: sampled pos, aabb_scale=1, const_dt=True]
[figure: sampled pos, aabb_scale=8, const_dt=True]

windows wsl: RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.ops.array))

JNeRF-master was downloaded from here.

fengxiang@LAPTOP-4ARTIG9C:/mnt/d/JNeRF/JNeRF-master$ python3 tools/run_net.py --config-file ./projects/ngp/configs/ngp_base.py
[i 0922 22:32:25.090768 68 compiler.py:955] Jittor(1.3.5.16) src: /home/fengxiang/.local/lib/python3.8/site-packages/jittor
[i 0922 22:32:25.092662 68 compiler.py:956] g++ at /usr/bin/g++(9.4.0)
[i 0922 22:32:25.092723 68 compiler.py:957] cache_path: /home/fengxiang/.cache/jittor/jt1.3.5/g++9.4.0/py3.8.10/Linux-5.10.16.xca/11thGenIntelRCxaa/default
[i 0922 22:32:26.438366 68 install_cuda.py:88] cuda_driver_version: [11, 6]
[i 0922 22:32:26.447372 68 init.py:411] Found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc(11.2.152) at /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/bin/nvcc.
[i 0922 22:32:26.475327 68 init.py:411] Found addr2line(2.34) at /usr/bin/addr2line.
[i 0922 22:32:26.662258 68 compiler.py:1010] cuda key:cu11.2.152
[i 0922 22:32:26.872002 68 init.py:227] Total mem: 12.28GB, using 4 procs for compiling.
[i 0922 22:32:26.949771 68 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
[i 0922 22:32:27.730311 68 init.cc:62] Found cuda archs: [86,]
[i 0922 22:32:27.814026 68 compile_extern.py:517] mpicc not found, distribution disabled.
[i 0922 22:32:27.846909 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include/cublas.h
[i 0922 22:32:27.856165 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcublas.so
[i 0922 22:32:27.856235 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcublasLt.so.11
[i 0922 22:32:28.354437 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include/cudnn.h
[i 0922 22:32:28.369732 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcudnn.so.8
[i 0922 22:32:28.369803 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcudnn_ops_infer.so.8
[i 0922 22:32:28.372593 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcudnn_ops_train.so.8
[i 0922 22:32:28.372934 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcudnn_cnn_infer.so.8
[i 0922 22:32:28.385145 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcudnn_cnn_train.so.8
[i 0922 22:32:28.863379 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include/curand.h
[i 0922 22:32:28.883515 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcurand.so
[i 0922 22:32:28.896946 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/include/cufft.h
[i 0922 22:32:28.913695 68 compile_extern.py:30] found /home/fengxiang/.cache/jittor/jtcuda/cuda11.2_cudnn8_linux/lib64/libcufft.so
[i 0922 22:32:29.007277 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.015585 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016354 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016460 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016566 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016684 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016779 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016874 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.016973 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.017173 68 cuda_flags.cc:32] CUDA enabled.
[i 0922 22:32:29.018092 68 cuda_flags.cc:32] CUDA enabled.
Loading config from: ./projects/ngp/configs/ngp_base.py
load train data
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:06<00:00, 30.67it/s]Traceback (most recent call last):
File "tools/run_net.py", line 54, in
main()
File "tools/run_net.py", line 44, in main
runner = Runner()
File "/home/fengxiang/JNeRF/python/jnerf/runner/runner.py", line 24, in init
self.dataset["train"] = build_from_cfg(self.cfg.dataset.train, DATASETS)
File "/home/fengxiang/JNeRF/python/jnerf/utils/registry.py", line 33, in build_from_cfg
module = obj_cls(**args)
File "/home/fengxiang/JNeRF/python/jnerf/dataset/dataset.py", line 48, in init
self.load_data()
File "/home/fengxiang/JNeRF/python/jnerf/dataset/dataset.py", line 157, in load_data
self.image_data=jt.array(self.image_data)
File "/home/fengxiang/.local/lib/python3.8/site-packages/jittor/init.py", line 323, in array
ret = ops.array(data)
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.ops.array)).

Types of your inputs are:
self = module,
args = (list, ),

The function declarations are:
VarHolder* array__(PyObject* obj)

Failed reason:[f 0922 22:32:37.806230 68 helper_cuda.h:128] CUDA error at /home/fengxiang/.local/lib/python3.8/site-packages/jittor/src/mem/allocator/cuda_host_allocator.cc:22 code=2( cudaErrorMemoryAllocation ) cudaMallocHost(&ptr, size)

output after execution?

After successfully running 'python tools/run_net.py --config-file ./projects/ngp/configs/ngp_base.py --task test' on lego, where can I see the output 3D model?

Error when importing jnerf

After completing the installation steps above, import jittor works fine and python3.7 -m jittor.test.test_example also runs correctly, but import jnerf gives the following error.
What is the cause of this, and how can I solve it?
Ubuntu 18.04
CUDA Version 10.1.243
python 3.7.6

After running import jnerf at the Python prompt:

[i 0616 11:04:00.885104 04 cuda_flags.cc:32] CUDA enabled.
nvcc fatal : Unknown option '-extended-lambda'
Traceback (most recent call last):
File "", line 1, in
File "/root/projects/JNeRF/python/jnerf/init.py", line 2, in
from . import models
File "/root/projects/JNeRF/python/jnerf/models/init.py", line 1, in
from . import networks
File "/root/projects/JNeRF/python/jnerf/models/networks/init.py", line 1, in
from . import ngp_network
File "/root/projects/JNeRF/python/jnerf/models/networks/ngp_network.py", line 7, in
from jnerf.ops.code_ops.fully_fused_mlp import FullyFusedMlp_weight
File "/root/projects/JNeRF/python/jnerf/ops/code_ops/fully_fused_mlp.py", line 6, in
from jnerf.ops.code_ops.global_vars import global_headers, proj_options
File "/root/projects/JNeRF/python/jnerf/ops/code_ops/global_vars.py", line 27, in
gv.sync()
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.sync)).

Types of your inputs are:
self = Var,
args = (),

The function declarations are:
void sync(bool device_sync = false, bool weak_sync = true)

Failed reason:[f 0616 11:04:03.509241 04 parallel_compiler.cc:329] Error happend during compilation:
[Error] source file location:/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default/cu10.1.243_sm_70/jit/code__IN_SIZE_0__OUT_SIZE_1__out0_dim_1__out0_type_int32__HEADER___include__pcg32_h__names___hash_45159d6c03260bf1_op.cc
Compile operator(0/1)failed:Op(2:0:1:1:i0:o1:s0,code->3)

Reason: [f 0616 11:04:03.508791 04 log.cc:608] Check failed ret(256) == 0(0) Run cmd failed: "/usr/local/cuda/bin/nvcc" "/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default/cu10.1.243_sm_70/jit/code__IN_SIZE_0__OUT_SIZE_1__out0_dim_1__out0_type_int32__HEADER___include__pcg32_h__names___hash_45159d6c03260bf1_op.cc" -std=c++14 -Xcompiler -fPIC -Xcompiler -march=native -Xcompiler -fdiagnostics-color=always -lstdc++ -ldl -shared -I"/root/.local/lib/python3.7/site-packages/jittor/src" -I/opt/conda/include/python3.7m -I/opt/conda/include/python3.7m -DHAS_CUDA -DIS_CUDA -I"/usr/local/cuda/include" -I"/root/.local/lib/python3.7/site-packages/jittor/extern/cuda/inc" -lcudart -L"/usr/local/cuda/lib64" -Xlinker -rpath="/usr/local/cuda/lib64" -I"/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default/cu10.1.243_sm_70" -L"/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default/cu10.1.243_sm_70" -Xlinker -rpath="/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default/cu10.1.243_sm_70" -L"/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default" -Xlinker -rpath="/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default" -l:"jit_utils_core.cpython-37m-x86_64-linux-gnu".so -l:"jittor_core.cpython-37m-x86_64-linux-gnu".so -x cu --cudart=shared -ccbin="/usr/bin/g++" --use_fast_math -w -I"/root/.local/lib/python3.7/site-packages/jittor/extern/cuda/inc" -arch=compute_70 -code=sm_70 -I/root/projects/JNeRF/python/jnerf/ops/code_ops/../op_include/eigen -I/root/projects/JNeRF/python/jnerf/ops/code_ops/../op_include/include -I/root/projects/JNeRF/python/jnerf/ops/code_ops/../op_include/pcg32 -I/root/projects/JNeRF/python/jnerf/ops/code_ops/../op_include/../op_header -DGLOBAL_VAR --extended-lambda --expt-relaxed-constexpr -o "/root/.cache/jittor/jt1.3.4/g++7.5.0/py3.7.6/Linux-3.10.0-1x5c/IntelRXeonRGolx80/default/cu10.1.243_sm_70/jit/code__IN_SIZE_0__OUT_SIZE_1__out0_dim_1__out0_type_int32__HEADER___include__pcg32_h__names___hash_45159d6c03260bf1_op.so"

NeuS has a bug at runtime

In tools/run_net.py, the line is_continue = False needs to be added before line 51.
In projects/neus/configs/neus_womask.py, line 13, dataset_dir = '/data/yqs/mesh_recon/NeuS_jittor/data_thin_structure/thin_cube'
needs to be changed by removing the leading '/', i.e. dataset_dir = 'data/yqs/mesh_recon/NeuS_jittor/data_thin_structure/thin_cube'

Calling Runner.save_ckpt during training causes an error in the next iteration

I added self.save_ckpt(path) to the loop of Runner.train in runner.py to save the parameters every few iterations, but this causes an error in the next iteration.

The code is as follows:

def train(self):
    for i in tqdm(range(self.start, self.tot_train_steps)):
        self.cfg.m_training_step = i
        img_ids, rays_o, rays_d, rgb_target = next(self.dataset["train"])
        training_background_color = jt.random([rgb_target.shape[0],3]).stop_grad()

        rgb_target = (rgb_target[..., :3] * rgb_target[..., 3:] + training_background_color * (1 - rgb_target[..., 3:])).detach()

        pos, dir = self.sampler.sample(img_ids, rays_o, rays_d, is_training=True)
        network_outputs = self.model(pos, dir)
        rgb = self.sampler.rays2rgb(network_outputs, training_background_color)

        loss = self.loss_func(rgb, rgb_target)
        self.optimizer.step(loss)
        self.ema_optimizer.ema_step()
        if self.using_fp16:
            self.model.set_fp16()

        if i>0 and i%self.val_freq==0:
            psnr=mse2psnr(self.val_img(i))
            print("STEP={} | LOSS={} | VAL PSNR={}".format(i,loss.mean().item(), psnr))

        # code add here
        if i>0 and i % self.cfg.i_weights == 0:
            path = os.path.join(self.save_path, f'{i:06d}.tar')
            self.save_ckpt(path)
    self.save_ckpt(os.path.join(self.save_path, "params.pkl"))
    self.test()

After investigation, I found that after save_ckpt the parameters of Runner.optimizer._nested_optimizer change from jt.Var to np.ndarray.

The full root cause is as follows:

In Runner.save_ckpt, self.optimizer._nested_optimizer.state_dict() calls the defaults function in jnerf/optims/adam.py, which returns a reference to self.param_groups. The dfs function inside jt.save then modifies its input in place via x[i] = dfs(x[i]), so Optimizer.param_groups is changed as well.
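A minimal sketch of one possible workaround, assuming the analysis above is correct: copy the container structure of the state dict before passing it to jt.save, so the in-place rewrite inside jt.save only touches the copy and never the optimizer's live param_groups. copy_state, runner and path below are illustrative names, not part of JNeRF.

import jittor as jt

def copy_state(obj):
    # recursively rebuild lists and dicts; leave jt.Var and scalars as shared references
    if isinstance(obj, dict):
        return {k: copy_state(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [copy_state(v) for v in obj]
    return obj

state = copy_state(runner.optimizer._nested_optimizer.state_dict())  # hypothetical runner instance
jt.save(state, path)  # path: checkpoint file to write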

MacOS support

How can I get MacBook M1 support?

When trying to run it shows the following error:
"AssertionError: Windows/MacOS is not supported yet, everyone is welcome to contribute to this"

Why don't you provide the implementation of calc_rgb.o?

First of all, thank you for your contribution in creating such an exciting project for the NeRF area.

However, when we tried to figure out how Jittor runs the rendering procedure behind the scenes, we were frustrated to find that the calc_rgb API is provided in a HARDCORE WAY, i.e. as a precompiled calc_rgb.o. I wonder why the implementation of that file is not exposed, and what your plan is for this issue.

OOM when validating

Each time I run the lego or fox sample, a GPU memory warning pops up immediately when validation starts. Memory usage during training is about 5400M / 12G, but validation fills up the memory and the whole training run is therefore suspended. Is this normal, or what can I do to reduce the memory usage?

It prompts:
[w 0718 17:48:56.120434 88 cuda_device_allocator.cc:29] Unable to alloc cuda device memory, use unify memory instead. This may cause low performance.

THANKS A LOT

Error when running the fox example

When running the command python3.x ./tools/run_net.py --config ./projects/ngp/configs/ngp_fox.py, I get the following error:
terminate called after throwing an instance of 'std::runtime_error'
what(): [f 0220 15:02:55.779228 76 helper_cuda.h:128] CUDA error at /root/.local/lib/python3.8/site-packages/jittor/src/ops/array_op.cc:32 code=2( cudaErrorMemoryAllocation ) cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking)
Aborted (core dumped)

How should I solve this?
