zijundeng / BDRAR
Code for the ECCV 2018 paper "Bidirectional Feature Pyramid Network with Recurrent Attention Residual Modules for Shadow Detection"
Hi,
I used pytorch-0.4.0, and under python2 it raises an error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
With pytorch-0.3 it runs normally.
Also, the UCF dataset link cannot be accessed; could you provide the dataset?
Thank you!
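One workaround reported for this class of error on newer PyTorch versions is to disable in-place activations, since an in-place ReLU/Dropout can overwrite a tensor the autograd graph still needs. A minimal sketch (the helper name `disable_inplace_ops` is my own, not from the repo):

```python
import torch.nn as nn

def disable_inplace_ops(model):
    # Turn off in-place ReLU/Dropout everywhere in the model. Newer PyTorch
    # versions raise "modified by an inplace operation" during backward()
    # when an in-place op overwrites a tensor needed for the gradient.
    for m in model.modules():
        if isinstance(m, (nn.ReLU, nn.Dropout)):
            m.inplace = False
    return model
```

Calling this on the network right after construction (e.g. `disable_inplace_ops(net)`) trades a little memory for autograd correctness.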
Hi guys,
Thanks so much for open-sourcing your code. However, without a license no one can legally run or build on it. Could you please add a license to the repository?
If you want it to be fully open source, I would suggest the Apache 2.0 License: https://tldrlegal.com/license/apache-license-2.0-(apache-2.0)
I cannot load the pre-trained model when running infer.py; there are many mismatched parameters.
For example, in the refine3_h2l module of the BDRAR network, the parameter names in the network are:
################
refine3_h2l.0.weight
refine3_h2l.1.weight
refine3_h2l.1.bias
refine3_h2l.3.weight
refine3_h2l.4.weight
refine3_h2l.4.bias
refine3_h2l.6.weight
refine3_h2l.7.weight
refine3_h2l.7.bias
refine3_l2h.0.weight
refine3_l2h.1.weight
refine3_l2h.1.bias
refine3_l2h.3.weight
refine3_l2h.4.weight
refine3_l2h.4.bias
refine3_l2h.6.weight
refine3_l2h.7.weight
refine3_l2h.7.bias
################
However, in the pre-trained model, the corresponding parameter names are:
################
refine3_hl.0.weight
refine3_hl.1.weight
refine3_hl.1.bias
refine3_hl.1.running_mean
refine3_hl.1.running_var
refine3_hl.3.weight
refine3_hl.4.weight
refine3_hl.4.bias
refine3_hl.4.running_mean
refine3_hl.4.running_var
refine3_hl.6.weight
refine3_hl.7.weight
refine3_hl.7.bias
refine3_hl.7.running_mean
refine3_hl.7.running_var
refine3_lh.0.weight
refine3_lh.1.weight
refine3_lh.1.bias
refine3_lh.1.running_mean
refine3_lh.1.running_var
refine3_lh.3.weight
refine3_lh.4.weight
refine3_lh.4.bias
refine3_lh.4.running_mean
refine3_lh.4.running_var
refine3_lh.6.weight
refine3_lh.7.weight
refine3_lh.7.bias
refine3_lh.7.running_mean
refine3_lh.7.running_var
################
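Based on the two lists above, the mismatch looks like a pure renaming (`refine3_hl` vs. `refine3_h2l`, `refine3_lh` vs. `refine3_l2h`), so one option is to remap the checkpoint keys before loading. A sketch, assuming the rename pairs inferred above are the only differences (the `running_mean`/`running_var` entries are BatchNorm buffers, which appear in a state_dict but not in a parameters-only listing):

```python
def remap_checkpoint_keys(state_dict):
    # Rename checkpoint keys to match the current model definition,
    # e.g. refine3_hl.0.weight -> refine3_h2l.0.weight.
    # The old/new substrings below are inferred from the lists above.
    renames = {'_hl.': '_h2l.', '_lh.': '_l2h.'}
    remapped = {}
    for key, value in state_dict.items():
        for old, new in renames.items():
            key = key.replace(old, new)
        remapped[key] = value
    return remapped

# Usage (hypothetical):
# net.load_state_dict(remap_checkpoint_keys(torch.load('3000.pth')))
```

If further names differ, `load_state_dict(..., strict=False)` can help surface the remaining mismatches without aborting.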
Hello there,
I am trying to fine-tune the model you shared on a new dataset, but I can't get it running due to the following error:
lib/python2.7/site-packages/torch/nn/functional.py:1749: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
Traceback (most recent call last):
  File "train.py", line 168, in <module>
    main()
  File "train.py", line 95, in main
    train(net, optimizer)
  File "train.py", line 136, in train
    loss.backward()
  File "/home/eloyroura/anaconda3/envs/BDRAR/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/eloyroura/anaconda3/envs/BDRAR/lib/python2.7/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Any hint of what I am doing wrong? Can you help me solve this issue? Thank you very much.
code crashes after BDRAR().cuda()
At the beginning, I encountered a problem like this:
"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation"
So, I solved it by changing
for m in self.modules():
    if isinstance(m, nn.ReLU) or isinstance(m, nn.Dropout):
        m.inplace = True
to be
for m in self.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = True
But then a new problem occurred: "RuntimeError: cuda runtime error (2): out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58".
Can't a single 1080 ti support this model?
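When a single GPU runs out of memory during training, one common mitigation besides shrinking the input scale is gradient accumulation: average the loss over several small batches before each optimizer step, emulating a larger effective batch. A toy sketch on a placeholder model (the model, data, and `accum_steps` value here are illustrative, not the repo's actual training setup):

```python
import torch
import torch.nn as nn

# Placeholder model and data standing in for BDRAR and its loader.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(2, 4), torch.randn(2, 1)) for _ in range(4)]

accum_steps = 4  # effective batch = 4 mini-batches
optimizer.zero_grad()
for i, (x, y) in enumerate(data):
    # Scale the loss so accumulated gradients average over accum_steps.
    loss = nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate across mini-batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Each mini-batch only has to fit in memory on its own, so peak usage stays close to the small-batch case.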
When I run this code, it shows: "Attempting to deserialize object on CUDA device 1 but torch.cuda.device_count() is 1. Please use torch.load with map_location to map your storages to an existing device." Do you know how to solve this?
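As the error message suggests, this happens when a checkpoint saved on `cuda:1` is loaded on a machine with only one GPU; `torch.load`'s `map_location` argument remaps the storages. A minimal sketch (the helper name and checkpoint path are placeholders):

```python
import torch

def load_checkpoint(path):
    # The checkpoint was saved on cuda:1; remap its storages onto whatever
    # device is actually available on this machine (cuda:0 or CPU).
    device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
    return torch.load(path, map_location=device)

# Usage (hypothetical): state = load_checkpoint('3000.pth')
```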
Thank you for your great work!
But I want to remove the shadow; how can I do this?
Could you give me some ideas?
Hello, first thanks for sharing the code and the trained network.
I downloaded 3000.pth that you provided, and wanted to do further fine tuning on other shadow dataset.
When I run train.py, I see that the file 3000_optim.pth is missing.
Can you upload it as well?
Many thanks,
Tamir
Thanks for sharing the whole training and testing code. Could you also share the pre-trained BDRAR model for salient object detection?
Hi @zijundeng ,
I loaded your pre-trained model and ran inference, but I got BER = 5.20. I have verified that my evaluation code is correct, and I used your network, data loader, and the rest of your pipeline.
I don't know whether there is a problem with the pre-trained model or something else.
Thanks.
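Since a small discrepancy here can also come from the metric itself, it may help to compare against a reference implementation of BER. A sketch of the balanced error rate as commonly defined for shadow detection, BER = 100 · (1 − ½(TP/(TP+FN) + TN/(TN+FP))), on binary masks (this is the standard formula, not code from the repo):

```python
import numpy as np

def ber(pred, gt):
    # Balanced Error Rate (%) between binary shadow masks:
    # BER = 100 * (1 - 0.5 * (TP/(TP+FN) + TN/(TN+FP)))
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return 100.0 * (1.0 - 0.5 * (tp / (tp + fn) + tn / (tn + fp)))
```

Differences in binarization threshold or mask resizing before this computation are a common source of BER gaps between reimplementations.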
I tried to run train.py, and the following error occurred. Can anyone help?
Traceback (most recent call last):
  File "D:\Program Files (x64)\PyCharm 2021.1.3\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\Program Files (x64)\PyCharm 2021.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "F:/20220410_BDRAR-master/train1.py", line 149, in <module>
    main()
  File "F:/20220410_BDRAR-master/train1.py", line 78, in main
    train(net, optimizer)
  File "F:/20220410_BDRAR-master/train1.py", line 117, in train
    loss.backward()
  File "D:\anaconda3\envs\BDRAR\lib\site-packages\torch\tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "D:\anaconda3\envs\BDRAR\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 8, 128, 128]], which is output 0 of ReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
python-BaseException
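As the error hint itself suggests, enabling anomaly detection makes the backward-pass traceback also point at the forward-pass operation whose output was modified in place, which narrows down which layer to fix:

```python
import torch

# Enable before the forward/backward pass; when backward() fails, PyTorch
# additionally prints the forward-pass traceback of the offending operation.
torch.autograd.set_detect_anomaly(True)
```

Note that anomaly detection slows training noticeably, so it is meant for debugging runs only.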
I used the pretrained model uploaded at the link and ran inference on the SBU test set.
The results do not match the published results.
Why might that be?