Comments (18)
Hello @xrenaa, I've added the cropping operation suggested by the authors here in the newest commit. It helped me solve the "all white" issue on drums and mic.
Do you want to try it again? You just need to re-clone the code and run the same instructions.
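Roughly, the cropping (precrop) trick samples rays only from the image center for the first few hundred iterations, so early batches are guaranteed to contain the object instead of mostly white background. A minimal numpy sketch of the idea (the real code uses torch; the function name here is illustrative, but `precrop_iters` / `precrop_frac` match the config options):

```python
import numpy as np

def sample_coords(H, W, n_rand, step, precrop_iters=500, precrop_frac=0.5, rng=None):
    # During the first precrop_iters steps, draw pixel coordinates only from
    # the central crop of the image, so early batches contain the object
    # rather than pure white background.
    if rng is None:
        rng = np.random.default_rng(0)
    if step < precrop_iters:
        dH = int(H // 2 * precrop_frac)
        dW = int(W // 2 * precrop_frac)
        rows = np.arange(H // 2 - dH, H // 2 + dH)
        cols = np.arange(W // 2 - dW, W // 2 + dW)
    else:
        rows, cols = np.arange(H), np.arange(W)
    grid = np.stack(np.meshgrid(rows, cols, indexing="ij"), axis=-1).reshape(-1, 2)
    return grid[rng.integers(0, grid.shape[0], size=n_rand)]
```

With the lego config above (400x400 images, `precrop_frac = 0.5`), the first 500 steps would only draw pixels from the central 200x200 region.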
from nerf-pytorch.
Hi @yenchenlin, I am happy to see it finally converge. Thanks for your update!
I will close this issue.
Hello, may I know your config?
I tried both configs:
- current ./configs/lego.txt:
expname = blender_paper_lego
basedir = ./logs
datadir = ./data/nerf_synthetic/lego
dataset_type = blender
no_batching = True
use_viewdirs = True
white_bkgd = True
lrate_decay = 500
N_samples = 64
N_importance = 128
N_rand = 1024
precrop_iters = 500
precrop_frac = 0.5
half_res = True
- previous config:
expname = lego_test
basedir = ./logs
datadir = ./data/nerf_synthetic/lego
dataset_type = blender
half_res = True
N_samples = 64
N_importance = 64
use_viewdirs = True
white_bkgd = True
N_rand = 1024
and the first output is all white, while the second produces a reasonable result ...
Hi, thanks for your reply! I just followed the settings in the repo:
expname = lego_test
basedir = ./logs
datadir = ./data/nerf_synthetic/lego
dataset_type = blender
half_res = True
N_samples = 64
N_importance = 64
use_viewdirs = True
white_bkgd = True
N_rand = 1024
And I followed the environment installation instructions.
However, the loss does not seem to change and gets stuck at about 0.13, which is weird. I did not change anything in the original repo. Thank you!
Hello, this is abnormal. Here is my output for the first 250 steps. The loss should drop to ~0.03 by step 250.
I am attaching my settings for the experiments here:
You can find the above files in the specified log path. Can you confirm they match exactly?
Additionally, here is the output before the loss that contains the dataset information:
Loaded blender (138, 400, 400, 4) torch.Size([40, 4, 4]) [400, 400, 555.5555155968841] ./data/nerf_synthetic/lego
Found ckpts []
Not ndc!
get rays
done, concats
shuffle rays
done
Begin
TRAIN views are [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99]
TEST views are [113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130
131 132 133 134 135 136 137]
VAL views are [100 101 102 103 104 105 106 107 108 109 110 111 112]
@xrenaa any update on this?
@yenchenlin Hi, I tried re-cloning the repo and creating new conda environments multiple times, with both Python 3.6 and 3.7. However, the loss is still stuck at about 0.13. I am still trying to figure out the reason, and my output is the same as yours.
I see, I will do my best to help. Can you report your PyTorch and CUDA versions?
Hi, I run on PyTorch 1.4 and CUDA 10.1.
I also found something interesting: at first the RGB video is plain white, but after about 600k iterations the loss comes down to 0.01 and the video looks like this:
Hello @xrenaa, I think the package versions look right. Do you have other machines to test this on? Today I tested it on two brand-new machines and it worked normally. Let me know if you can reproduce the same issue on multiple machines.
I am using mini-conda to create the environment:
conda create -n tmp python=3.6
conda activate tmp
Glad to know it works! In my experience, adding this cropping whenever white_bkgd = True helps a lot :)
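For context on why white_bkgd = True makes this failure mode so easy to fall into: the renderer composites the predicted color over a white background, so a network that predicts zero density everywhere outputs pure white "for free" and can sit in a loss plateau on the mostly-white synthetic images. A minimal numpy sketch of that compositing step (function name illustrative):

```python
import numpy as np

def composite_over_white(rgb_map, acc_map):
    # acc_map is the accumulated opacity along each ray (0 = empty space).
    # Compositing over white means empty rays render as pure white, so a
    # network that predicts zero density everywhere reproduces the white
    # background exactly and the loss can plateau there.
    return rgb_map + (1.0 - acc_map[..., None])

# Degenerate network: zero density everywhere -> acc = 0 -> all-white output.
rgb = np.zeros((4, 3))
acc = np.zeros(4)
white = composite_over_white(rgb, acc)  # every pixel is (1, 1, 1)
```

Restricting early sampling to the object crop breaks this plateau by forcing the network to see non-background pixels from the start.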
I also have this problem with my implementation. The problem is indeed due to sampling all-white pixels, which happens totally by chance... so I just rerun the experiment if the loss doesn't go down after ~100 steps, and eventually it works when I get lucky in the first iterations.
@kwea123 I recommend the cropping solution mentioned above, it eliminates the need for luck.
Increasing the batch size or changing the optimizer (I use RAdam) also solves the problem for me.
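For anyone wanting to try the optimizer swap: a sketch of replacing the default Adam with RAdam in PyTorch. Note torch.optim.RAdam ships with PyTorch 1.8+, so the PyTorch 1.4 setup mentioned above would need a third-party implementation; the tiny MLP and hyperparameters here are stand-ins, not the repo's actual model.

```python
import torch

# Stand-in MLP for the NeRF network (the real model is much larger).
model = torch.nn.Sequential(
    torch.nn.Linear(3, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 4),
)

# RAdam rectifies the adaptive learning rate early in training, which is
# the phase where the all-white local minimum is usually entered.
optimizer = torch.optim.RAdam(model.parameters(), lr=5e-4)

x = torch.randn(16, 3)
target = torch.rand(16, 4)
for _ in range(5):
    optimizer.zero_grad()
    loss = torch.mean((model(x) - target) ** 2)
    loss.backward()
    optimizer.step()
```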
I have the same problem on my own LLFF data. Actually, the loss went down quickly and converged in a short time in my experiment, but the final rendered images are all black. I see that your experiments all failed to converge in the first stage, which is different from my problem. Any ideas?
Hello, I need help, please!
After importing the project in PyCharm and placing the lego data under data/nerf_synthetic, I started training with "python run_nerf.py --config configs/lego.txt", but it shows "no ndc" and val does not run (it stays at 0). The results are shown below.
Please help me, thank you!