abdo-eldesokey / nconv
A PyTorch implementation for our work "Confidence Propagation through CNNs for Guided Sparse Depth Regression"
License: GNU General Public License v3.0
Hey there.
I'm a little confused about which datasets you used for training exp_guided_nconv_cnn_l2. It seems to be train+selval according to run-nconv-cnn.py#L67, but it is validated on selval too.
Also, what is the difference between nconv/workspace/exp_guided_nconv_cnn_l2/checkpoints/CNN_ep0039.pth.tar and the one for the test set? Which datasets did you train each on?
Hi,
I have a question about the code.
When I train the model on KittiDataset, I find that the program keeps putting data into the cached memory until the memory is full.
When I use the "top" command in the terminal, I see the following (I use exp=exp_guided_enc_dec):
Have you ever encountered this situation? Did you notice it? (Although it might not affect training?)
Many thanks!
Thank you for sharing this wonderful work.
When I go to the NYU repo, I can't find the NYU pretrained weights. Could you provide them?
Hi, I have a few questions about your implementation.
This is the exp I use: exp_unguided_depth
But I got this error:
TypeError: __init__() missing 1 required positional argument: 'padding_mode'
Could you help us figure it out?
The demo works very well when I choose -exp exp_guided_nconv_cnn_l2
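This TypeError usually points to a PyTorch version mismatch: newer releases added a `padding_mode` parameter to the convolution base class that older subclass code does not pass. As a sketch (the helper name and the `'zeros'` default are my own, not from the repo), one version-agnostic workaround is to forward the argument only when the installed base `__init__` actually declares it:

```python
import inspect

# Hedged sketch: add `padding_mode` only when the base constructor of the
# installed PyTorch version expects it. Names here are illustrative.
def extra_base_kwargs(base_init):
    """Return extra kwargs to forward to base_init, based on its signature."""
    if 'padding_mode' in inspect.signature(base_init).parameters:
        return {'padding_mode': 'zeros'}
    return {}
```

The subclass would then call something like `super().__init__(..., **extra_base_kwargs(_ConvNd.__init__))`, so the same code runs on both old and new PyTorch.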
Hi,
Is there an error in the evaluate function in kittidepthtrainer?
outputs *= self.params['data_normalize_factor']/256
labels *= self.params['data_normalize_factor']/256
Why do you divide the outputs by 256? I think the output should just be multiplied by data_normalize_factor.
Many thanks!
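One possible reading of the factor/256 (an assumption on my part, not confirmed by the repo): KITTI depth-completion PNGs store depth as uint16 with depth_in_metres = png_value / 256, so if the dataloader normalizes the raw PNG by data_normalize_factor, multiplying the output by data_normalize_factor / 256 converts it back to metres:

```python
# Hedged sketch: assumed loader constant and KITTI uint16 depth encoding
# (depth_in_metres = png_value / 256).
DATA_NORMALIZE_FACTOR = 256 * 256

def to_metres(normalized_value):
    # Undo the loader normalization, then undo the uint16 encoding.
    return normalized_value * DATA_NORMALIZE_FACTOR / 256

png_value = 12800                               # raw uint16 value from the PNG
normalized = png_value / DATA_NORMALIZE_FACTOR  # what the loader would produce
assert to_metres(normalized) == png_value / 256  # back to metres
```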
Hi!
When I run:
python3 run-nconv-cnn.py -mode eval -exp /home/chen/nconv/workspace/exp_guided_nconv_cnn_l1
This is the error:
Traceback (most recent call last):
File "run-nconv-cnn.py", line 56, in <module>
dataloaders, dataset_sizes = KittiDepthDataloader(params)
File "/home/chen/nconv/dataloader/KittiDepthDataloader.py", line 44, in KittiDepthDataloader
dataloaders['train'] = DataLoader(image_datasets['train'], shuffle=True, batch_size=params['train_batch_sz'], num_workers=4)
File "/home/chen/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 802, in __init__
sampler = RandomSampler(dataset)
File "/home/chen/.local/lib/python3.5/site-packages/torch/utils/data/sampler.py", line 64, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integeral value, but got num_samples=0
Could you tell me why?
Thanks!
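The error above means the Dataset reported len() == 0 when the RandomSampler was built, which usually traces back to a wrong or empty dataset directory. A quick stdlib sanity check before constructing the DataLoader (the file pattern and path are assumptions, not the repo's actual config):

```python
from pathlib import Path

# Hedged sketch: count the files the loader would see; 0 means the
# dataset path in the experiment's params is wrong or empty.
def count_depth_files(root, pattern="*.png"):
    root = Path(root)
    if not root.is_dir():
        return 0
    return sum(1 for _ in root.rglob(pattern))

# Usage: if this prints 0, fix the dataset path before creating the DataLoader.
# print(count_depth_files('/path/to/kitti_depth/train'))
```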
Hello,
I am very interested in running your evaluation code with the trained model. Will you provide it soon?
Thanks,
Adam
Hi, recently I looked into your code and have a doubt about how to obtain a confidence map from the sparse depth map. Please help me clear up this doubt. Thanks in advance.
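For what it's worth, in normalized-convolution approaches the input confidence is typically binary: 1 where a sparse depth measurement exists and 0 where the map is empty (this is an assumption; check the repo's dataloader for the exact rule). A minimal sketch:

```python
# Hedged sketch: binary input confidence from a 2-D sparse depth map,
# represented here as a plain list of rows for self-containment.
def initial_confidence(sparse_depth):
    """1.0 where a depth measurement exists, 0.0 elsewhere."""
    return [[1.0 if d > 0 else 0.0 for d in row] for row in sparse_depth]

initial_confidence([[0.0, 3.2], [7.1, 0.0]])  # -> [[0.0, 1.0], [1.0, 0.0]]
```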
Hello! When retraining the code on the NYU dataset, the original file nconv.py is as follows:
But when running it, I hit a problem about a missing argument:
And after I add the argument padding_mode='zeros', it reports a size mismatch.
So I'm confused and wondering how to fix this problem. Looking forward to your reply.
Hi,
I have a question about the EnforcePos part in the file nconv.py.
Why do you choose to use a softplus function as a forward_pre_hook, instead of just using softplus in the forward function?
My understanding is that, after many training iterations, the forward_pre_hook (softplus) will have been applied to the weights multiple times, making W = softplus(...softplus(W))? (Although you set beta=10, very close to ReLU, so it is almost an identity map when W > 0?) And in the last training iteration, after the backward optimization, we do not apply the softplus to the weights. (Which I think may mean the weights have already converged?)
But I am still confused about that...
Can we just simply apply softplus in forward function, for both train and eval phase?
Many thanks!
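The composition concern above can be checked numerically. A self-contained sketch (pure-Python softplus standing in for F.softplus; whether the repo's hook really overwrites the stored weight is the questioner's premise, not something I verified):

```python
import math

def softplus(x, beta=10.0):
    # softplus(x) = (1/beta) * log(1 + exp(beta * x))
    return (1.0 / beta) * math.log1p(math.exp(beta * x))

# Option 1 (pre-hook that overwrites the stored weight): repeated forward
# passes compose the function, w -> softplus(softplus(... w)).
w = 0.5
for _ in range(3):
    w = softplus(w)

# With beta=10 softplus is nearly the identity for w > 0, so even the
# composed value stays close to the original positive weight.
assert abs(w - 0.5) < 1e-2

# Option 2 (softplus inside forward): the raw weight is stored unchanged
# and softplus is applied exactly once per forward pass, never composed.
```

So for positive weights the two options give almost the same effective weight; the visible difference is whether the *stored* parameter is the raw value or the already-positive one.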
Hi, Abdelrahman!
I notice that in normalized convolution the basis is set to only one basis, the constant function f(x)=1. I wonder why we don't have 2 or more basis functions, and what would happen then. I would be grateful if you would share your view on the basis setting.
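For context, with the single constant basis f(x)=1, normalized convolution reduces to a confidence-weighted local average, conv(conf * signal) / conv(conf); with a richer basis (e.g. {1, x}) one would instead solve a small least-squares fit per pixel. A 1-D sketch of the constant-basis case (my own illustration, not the repo's implementation):

```python
# Hedged sketch: 1-D normalized convolution with basis {1}.
def nconv1d(signal, conf, kernel):
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                num += w * conf[idx] * signal[idx]  # applicability * conf * data
                den += w * conf[idx]                # normalization term
        out.append(num / den if den > 0 else 0.0)
    return out

# A missing sample (conf=0) is filled in from its confident neighbours:
nconv1d([5.0, 0.0, 5.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0])  # -> [5.0, 5.0, 5.0]
```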