reproducebyyq / channel_pruning_yq
This repo is a reproduction of Channel Pruning.
License: MIT License
When I test it, after the LASSO regression I do this:

```python
count = sum(idxs)
idxs[0:count] = True
idxs[count:] = False
```

The number of preserved filters is the same, but now the first `count` filters are kept instead of the selected ones. Yet the top-5 and top-1 accuracy do not drop. Why?
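To make the question above concrete, here is a toy sketch (not the repo's code) of LASSO-based channel selection on synthetic data. `idxs` marks *which* channels the L1 penalty kept, so overwriting it with the first `count` positions generally keeps a different, usually worse, subset of channels:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
C, N = 8, 200  # number of channels, number of samples (toy sizes)
# X[:, c] plays the role of channel c's contribution to the output.
X = rng.normal(size=(N, C))
true_beta = np.zeros(C)
true_beta[[1, 4, 6]] = [2.0, -1.5, 3.0]  # only three channels matter
y = X @ true_beta + 0.01 * rng.normal(size=N)

lasso = Lasso(alpha=0.1)
lasso.fit(X, y)
idxs = lasso.coef_ != 0          # boolean mask of channels to keep
count = int(idxs.sum())
print(np.flatnonzero(idxs))      # the selected channel indices, e.g. [1 4 6]
# Overwriting idxs so that the first `count` entries are True instead
# would keep channels [0 1 2] -- a different subset than LASSO chose.
```

If accuracy does not drop after such a swap, it may simply mean that for that layer many channel subsets reconstruct the output about equally well, but that is something to verify per layer rather than assume.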
Thanks for your sharing; it is simpler to understand than the original code. My question is as follows:
```json
{
  "conv1_2": 24,
  "conv2_1": 48,
  "conv2_2": 48,
  "conv3_1": 64,
  "conv3_2": 128,
  "conv3_3": 160,
  "conv4_1": 192,
  "conv4_2": 192,
  "conv4_3": 256,
  "conv5_1": 320,
  "conv5_2": 320,
  "conv5_3": 320
}
```
How are these numbers calculated?
Hi, thanks for sharing, but I get the following error when I run your code:

```
Traceback (most recent call last):
  File "./low_rank_and_channel_pruning.py", line 12, in <module>
    from util import *  # my function
ImportError: No module named 'util'
```

Can you share this file you wrote? Thanks.
Hi, thank you for your code; it helped me understand the author's long code, but I have some questions about it.
First, why is `nPicsPerBatch` in your code `k_h * k_w`? And `N` in your code is `N = nBatch * nPointsPerLayer`; could you please explain that?
Second, is there a mistake in the code at https://github.com/ReProduceByYQ/Channel_pruning_yq/blob/54bb57b96b884d6ae4b2631c7e92150322cabb3e/util.py#L198 ? When I run your code there is an error: since `randx = randy = 10`, `point_feat` should be `(1, 48, 10, 10)`, and after reshaping the size should be `(480, 10)`. Is that right? I am hoping for your reply. Thank you so much!
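The shape claim in the question above is easy to check in isolation. Here is a minimal NumPy sketch using the shapes from the question (batch 1, 48 channels, `randx = randy = 10`; these are the question's numbers, not values read from the repo):

```python
import numpy as np

# Stand-in for point_feat with shape (batch=1, channels=48, randx=10, randy=10).
point_feat = np.arange(1 * 48 * 10 * 10).reshape(1, 48, 10, 10)

# 1 * 48 * 10 * 10 = 4800 elements, so a (480, 10) reshape is valid:
flat = point_feat.reshape(480, 10)
print(flat.shape)  # (480, 10)
```

So the element count does permit a `(480, 10)` view; whether that is the reshape the repo's sampling code intends (rows grouped by channel vs. by spatial point) is the part that needs checking against `util.py`.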
Have you implemented pruning for ResNet? Looking at the original implementation, it seems the Caffe internals were also modified. I would like to avoid changing the internals; could you give some advice?
Hi Jason Yue,
Your code helped me A LOT (^_^) in understanding He Yihui's channel pruning paper. My question is: have you tried fully pruning VGG16 (from conv1_1 to conv4_3) with your implementation? Did you manage to lose only 1.7% Top-5 accuracy at 5X, as the paper reported?
I also re-implemented He Yihui's paper with channel pruning only, but when I prune VGG16 to 5X (pruning each layer's channel count to exactly match He Yihui's 5X caffemodel prototxt), the Top-5 accuracy loss is around 5%, while the paper reported only 1.7%.
Did you also experience this issue? Thanks.
The validation set is used during compression, and then the validation set is used again for testing. Is that reasonable?