osvai / odconv
The official project website of "Omni-Dimensional Dynamic Convolution" (ODConv for short, spotlight in ICLR 2022).
License: Apache License 2.0
It runs fine with batch_size=4, but fails with batch_size=1. What could be the reason?
Hello, I would like to ask you a question. During training, I get this error: TypeError: __init__() got an unexpected keyword argument 'reduction'. I can't think of a solution; do you have any insight?
Thank you very much for your proposed omni-dimensional dynamic convolution, which is very innovative. I would like to ask a question: could you provide a 1D-convolution version of ODConv for reference?
I define a layer with in_channels=128, out_channels=128, kernel_size=1, stride=1, padding=0, dilation=1, groups=1.
With kernel_num=1 the model trains normally, but with kernel_num=4 it fails:
output = output.view(batch_size, self.out_planes, output.size(-2), output.size(-1))
RuntimeError: shape '[16, 128, 160, 160]' is invalid for input of size 3276800
In the debugger I can see that the input x has shape [16, 128, 160, 160], which becomes [1, 2048, 160, 160] after the reshape.
self.weight has shape [4, 128, 128, 1, 1],
aggregate_weight has shape [128, 128, 1, 1],
and after F.conv2d the output has shape [1, 128, 160, 160],
which does not match the input shape required by the following output.view(batch_size, self.out_planes, output.size(-2), output.size(-1)).
Where is the problem?
Thanks.
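For context, my reading of typical dynamic-convolution implementations (a sketch, not the authors' code): when kernel_num > 1, the per-sample attended kernels must keep the batch dimension, giving an aggregate weight of shape [batch*out_planes, in_planes//groups, k, k], and F.conv2d must be called with groups=batch*groups. The [128, 128, 1, 1] aggregate_weight above suggests the batch dimension was summed away. A minimal sketch of the intended shapes (spatial size reduced from 160 to 8 for brevity):

```python
import torch
import torch.nn.functional as F

# Shapes mirroring the report (H = W reduced from 160 to 8 for speed).
B, Cin, Cout, H, W, k, kernel_num = 16, 128, 128, 8, 8, 1, 4

x = torch.randn(B, Cin, H, W)
weight = torch.randn(kernel_num, Cout, Cin, k, k)          # candidate kernels
kernel_attention = torch.softmax(torch.randn(B, kernel_num), dim=1)

# Aggregate the candidate kernels per sample, KEEPING the batch dimension:
aggregate_weight = (kernel_attention.view(B, kernel_num, 1, 1, 1, 1)
                    * weight.unsqueeze(0)).sum(dim=1)       # [B, Cout, Cin, k, k]
aggregate_weight = aggregate_weight.view(B * Cout, Cin, k, k)

# Fold the batch into the channel dim and use one conv group per sample.
output = F.conv2d(x.reshape(1, B * Cin, H, W), weight=aggregate_weight,
                  bias=None, stride=1, padding=0, groups=B)
output = output.view(B, Cout, output.size(-2), output.size(-1))
print(output.shape)  # torch.Size([16, 128, 8, 8])
```

If aggregate_weight comes out as [Cout, Cin, k, k] instead of [B*Cout, Cin, k, k], F.conv2d produces [1, Cout, H, W] and the final view fails exactly as reported.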
y = _reconstruct(x, memo, *rv)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\copy.py", line 153, in deepcopy
y = copier(memo)
File "C:\D_installation_packet\Anaconda\installion_package\envs\yolov6\lib\site-packages\torch\tensor.py", line 55, in deepcopy
raise RuntimeError("Only Tensors created explicitly by the user "
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment
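For what it's worth, this error usually means a module attribute holds a non-leaf tensor (one produced by an operation, still carrying autograd history), which torch's deepcopy protocol rejects; YOLOv6's EMA/model-copy code calls copy.deepcopy on the whole model. A minimal reproduction and the usual fix (detach the tensor, or store it as a Parameter/buffer); the class names here are hypothetical:

```python
import copy
import torch
import torch.nn as nn

class Bad(nn.Module):
    """Stores a NON-leaf tensor (has autograd history) -> deepcopy fails."""
    def __init__(self):
        super().__init__()
        w = torch.randn(3, 3, requires_grad=True)
        self.cached = w * 2.0  # non-leaf: produced by an op

class Good(nn.Module):
    """Detaching (or using nn.Parameter / register_buffer) fixes it."""
    def __init__(self):
        super().__init__()
        w = torch.randn(3, 3, requires_grad=True)
        self.cached = (w * 2.0).detach()  # leaf again

try:
    copy.deepcopy(Bad())
    bad_copied = True
except RuntimeError as e:
    bad_copied = False
    print(e)  # "Only Tensors created explicitly by the user ..."

good = copy.deepcopy(Good())  # works
```

So the fix is to find which ODConv attribute is assigned a tensor computed inside forward/init and detach it (or register it properly).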
Thank you for your work. Did you use pretrained ResNet50 weights for the object detection experiments in your work?
Bias is set to None in your code. Is this the same setting as the experiments in the paper? I couldn't find a corresponding description in the paper. If so, could you briefly tell me the reason? Thanks!
Line 126 in c63c08b
TypeError: conv2d() received an invalid combination of arguments - got (Tensor, weight=Tensor, bias=NoneType, stride=float, padding=int, dilation=int, groups=int), but expected one of:
The error points to this call: output = F.conv2d(x, weight=self.weight.squeeze(dim=0), bias=None, stride=self.stride, padding=self.padding,
dilation=self.dilation, groups=self.groups)
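The "stride=float" in the message is the giveaway: F.conv2d only accepts ints (or tuples of ints) for stride, so self.stride was probably parsed from a config as a float. A small sketch of the failure and the cast that fixes it:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
weight = torch.randn(8, 3, 3, 3)
stride = 1.0  # e.g. parsed from a YAML config as a float

# F.conv2d rejects a float stride with exactly this TypeError:
try:
    F.conv2d(x, weight, bias=None, stride=stride, padding=1)
except TypeError as e:
    print("float stride rejected:", e)

# Casting to int restores the expected call signature.
out = F.conv2d(x, weight, bias=None, stride=int(stride), padding=1)
print(out.shape)  # torch.Size([1, 8, 8, 8])
```

The robust fix is to cast once where the module stores the value, e.g. self.stride = int(stride) in __init__.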
Dear authors,
Can you tell me how you calculate MAdds?
There is a lot of confusion on the internet about how to calculate this.
Thank you very much!
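One common convention (an assumption, not necessarily the authors' exact accounting): a convolution's MAdds are output elements times multiplies per output element, i.e. Cout × Hout × Wout × (Cin / groups) × kh × kw, summed over all layers. A sketch with a hypothetical conv_madds helper:

```python
import torch
import torch.nn as nn

def conv_madds(conv: nn.Conv2d, out_h: int, out_w: int) -> int:
    """Multiply-adds of one Conv2d, counting one MAdd per multiply."""
    kh, kw = conv.kernel_size
    return conv.out_channels * out_h * out_w * (conv.in_channels // conv.groups) * kh * kw

# Example: a 3x3 conv, 64 -> 128 channels, on a 56x56 output map.
conv = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
x = torch.randn(1, 64, 56, 56)
y = conv(x)
madds = conv_madds(conv, y.size(-2), y.size(-1))
print(madds)  # 128 * 56 * 56 * 64 * 9 = 231,211,008
```

Note the confusion online is usually over whether to count multiply-adds (as here), FLOPs = 2 × MAdds, or to include the bias add; papers should state which convention they use.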
I downloaded the model and got an 'archive' folder. When I try to load it with torch.load, I get the error: _pickle.UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified. I want a backbone pre-trained model (the best .pth file) to complete my other tasks; what should I do?
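A modern .pth file is a zip archive; an 'archive' folder usually means the checkpoint was extracted (e.g. by a browser or archive tool), and torch.load on a member file then fails with exactly this UnpicklingError. One recovery that can work (sketched below by simulating the extraction on a toy checkpoint): re-zip the extracted folder uncompressed, keeping it as the single root entry, and load the rebuilt file.

```python
import os
import zipfile
import torch
import torch.nn as nn

# Simulate the report: save a checkpoint, then "accidentally" extract the zip.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "ckpt.pth")
os.makedirs("extracted", exist_ok=True)
with zipfile.ZipFile("ckpt.pth") as zf:
    zf.extractall("extracted")  # yields one root folder (the 'archive' directory)

root = os.listdir("extracted")[0]  # name of the extracted root folder

# Recovery: re-zip that folder (uncompressed) so it is again the single root
# entry of the archive, then torch.load the rebuilt file as usual.
with zipfile.ZipFile("restored.zip", "w", zipfile.ZIP_STORED) as out:
    for dirpath, _, files in os.walk(os.path.join("extracted", root)):
        for name in files:
            full = os.path.join(dirpath, name)
            out.write(full, os.path.relpath(full, "extracted"))

state = torch.load("restored.zip", map_location="cpu")
print(sorted(state.keys()))  # ['bias', 'weight']
```

If re-downloading is possible, that is still the simplest path; this sketch is only for when the original .pth is gone.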
Can the kernel_size in the ODConv module be 1×1?
Thank you for your work! What are the results on ResNet101?
Thanks for your novel work!
But I hit errors while using torchsummary.summary() to compute the number of parameters of ODConv.
The problem seems to be caused by the output of self.attention(x):
File "/workspace/PanoFormer/PanoFormer/network/SphereConv2d.py", line 345, in _forward_impl_common
channel_attention, filter_attention, spatial_attention, kernel_attention = self.attention(x)
File "/opt/conda/envs/panoformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1547, in _call_impl
hook_result = hook(self, args, result)
File "/opt/conda/envs/panoformer/lib/python3.8/site-packages/torchsummary/torchsummary.py", line 22, in hook
summary[m_key]["output_shape"] = [
File "/opt/conda/envs/panoformer/lib/python3.8/site-packages/torchsummary/torchsummary.py", line 23, in <listcomp>
[-1] + list(o.size())[1:] for o in output
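torchsummary's shape hook assumes every module output is a tensor (or list of tensors) with .size(); self.attention(x) returns a 4-tuple, and for 1×1 kernels some elements may even be plain scalars, which breaks the hook's list comprehension. If you only need the parameter count, summing p.numel() avoids forward hooks entirely:

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Parameter count independent of forward hooks, so tuple (or scalar)
    module outputs cannot break it the way torchsummary's shape hook does."""
    return sum(p.numel() for p in model.parameters())

# Example on a small stand-in module (not ODConv itself).
m = nn.Conv2d(16, 32, kernel_size=3, bias=False)
print(count_params(m))  # 32 * 16 * 3 * 3 = 4608
```

Alternatives that handle multi-output modules more gracefully include torchinfo (the maintained successor of torchsummary).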
Hi, great job!
I want to know when you can release your code and models?
I have reproduced your code but got inferior results compared with DyConv. Perhaps some implementation details are missing. Looking forward to your release!
Thanks!
After replacing the backbone network's convolutions with this convolution, why does the parameter count decrease?
Can odconv be called in the same way as functional.conv2d?
Can you provide a download link for Baidu Cloud? Thank you very much!
Hi, roughly when will the code be released?
Thank you very much for your proposed multi-dimensional dynamic convolution mode, which is very innovative. I would like to ask a question: Can you provide an ODconv with a 1D convolution level for reference?
For YOLOv7, which part of the configuration file yolov7.yaml should I add this module to?
When I try to apply ODConv to YOLOv7, the following error was shown:
File "D:\YOLOv7-ODConv\models\ODConv.py", line 81, in forward
x = self.bn(x)
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\modules\batchnorm.py", line 182, in forward
self.eps,
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\functional.py", line 2448, in batch_norm
_verify_batch_size(input.size())
File "D:\Anaconda\envs\yolov7\lib\site-packages\torch\nn\functional.py", line 2416, in _verify_batch_size
raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 16, 1, 1])
I am relatively new to this field; I will be glad to elaborate, but I have no idea what else to report.
I see you did not implement the "update_temperature" function in this code version, right?
T should be annealed from 30 to 1, i.e. 30, 27, 24, ..., 3, 1, 1, ..., 1.
But in your code, class Attention uses a fixed self.temperature = 1.0.
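For reference, a sketch of the schedule described above (start at T=30, subtract 3 per epoch, clamp at 1); the attribute and method names mirror the issue text, and the exact step size is an assumption:

```python
# Hypothetical minimal Attention stub, only to illustrate the annealing.
class Attention:
    def __init__(self, temperature: float = 30.0):
        self.temperature = temperature  # softmax temperature for the attentions

    def update_temperature(self) -> None:
        # Called once per epoch: 30, 27, 24, ..., 3, then held at 1.
        self.temperature = max(self.temperature - 3.0, 1.0)

att = Attention()
schedule = []
for epoch in range(12):
    schedule.append(att.temperature)
    att.update_temperature()
print(schedule)  # [30.0, 27.0, 24.0, 21.0, 18.0, 15.0, 12.0, 9.0, 6.0, 3.0, 1.0, 1.0]
```

A fixed temperature of 1.0 corresponds to the fully annealed state, which may be why the released code omits the update.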
File "E:\paper_code\paper_3\Paper_6cls\model\module.py", line 272, in forward
return self._forward_impl(x)
File "E:\paper_code\paper_3\Paper_6cls\model\module.py", line 250, in _forward_impl_common
channel_attention, filter_attention, spatial_attention, kernel_attention = self.attention(x)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "E:\paper_code\paper_3\Paper_6cls\model\module.py", line 212, in forward
x = self.bn(x)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\modules\batchnorm.py", line 140, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\functional.py", line 2144, in batch_norm
_verify_batch_size(input.size())
File "C:\D_installation_packet\Anaconda\installion_package\envs\PaperMcnn\lib\site-packages\torch\nn\functional.py", line 2111, in _verify_batch_size
raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 16, 1, 1])
How can I solve this? Thanks.
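This error comes from the attention branch: global average pooling reduces the input to [1, 16, 1, 1], and a BatchNorm layer in training mode cannot compute statistics from a single value per channel. A sketch of the failure and two common workarounds (eval mode for inference; a batch-independent norm such as GroupNorm for batch_size=1 training):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 1, 1)  # the [1, 16, 1, 1] tensor from the traceback

bn = nn.BatchNorm2d(16)
try:
    bn(x)  # training mode: needs more than 1 value per channel
except ValueError as e:
    print("BatchNorm failed:", e)

# Option 1 (inference only): switch to eval mode, which uses running stats.
bn.eval()
y = bn(x)

# Option 2 (training with batch_size=1): a norm computed per sample,
# e.g. GroupNorm, which needs no batch statistics.
gn = nn.GroupNorm(num_groups=1, num_channels=16)
z = gn(x)
print(y.shape, z.shape)  # both torch.Size([1, 16, 1, 1])
```

For training, the simplest fixes are batch_size > 1 plus drop_last=True in the DataLoader (so no size-1 remainder batch reaches the model), or swapping the attention branch's BatchNorm for a batch-independent norm as above.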
My network is structured as Conv-BN-ReLU.
I replaced nn.Conv2d with ODConv2d and want to fuse the conv and batchnorm layers at inference time.
After switching to ODConv, how can I fuse the Conv and BN layers at inference, as before, in a CNN built from Conv-BN-ReLU/SiLU/GELU blocks?
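For a plain Conv-BN pair, BN folds into the conv as W' = W * gamma / sqrt(var + eps) and b' = (b - mean) * gamma / sqrt(var + eps) + beta. With ODConv the effective kernel is generated per input, so this cannot be precomputed once; the same scale/shift would instead have to be applied to the aggregated weight inside each forward pass. A sketch of the standard static fusion for reference (the helper name is mine):

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold an eval-mode BatchNorm into a static Conv2d for inference."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.view(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias.data
    return fused

conv, bn = nn.Conv2d(8, 8, 3, padding=1, bias=False), nn.BatchNorm2d(8)
# Give the BN non-trivial statistics so the check is meaningful.
bn.running_mean.uniform_(-1, 1)
bn.running_var.uniform_(0.5, 2)
bn.weight.data.uniform_(0.5, 2)
bn.bias.data.uniform_(-1, 1)
bn.eval()

x = torch.randn(2, 8, 5, 5)
fused = fuse_conv_bn(conv, bn)
print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))  # True
```

So for an ODConv-BN-activation block, the practical option is to keep BN unfused (or fold its scale/shift into the runtime weight aggregation), rather than the one-time fusion used for static convs.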