Comments (5)
Hi there,
Sorry, that whole quantization folder needs to be redone, since it's mostly just pseudocode! If you don't need the quantization step, take a look at the sparsity folder instead; the IPython notebook there is what I have put most of my time into.
If you want to use the quantization code as a skeleton for your own implementation, you can take any of the files in the quantization folder, rename it to tinynet.py (or import it by its actual name), and build on it.
I hope to get around to fixing all the quantization code soon.
Thanks,
Jack
Thanks, I have run your sparsity code, but after pruning the network's parameter count didn't decrease... maybe converting the notebook to a .py file introduced an error.
Could you help me? I found some quantization code on GitHub, but after quantization the model's size is unchanged, when it should be smaller. How do I save the model after quantizing it?
Any chance you can put your version of this in a public repository? That way I can look through your code and try to help.
Usually in PyTorch, to save a model you would use something along the lines of:
import os
import torch

# Bundle the weights and any metadata worth keeping.
state = {
    'model': model.state_dict(),
    'acc': acc,
}
if not os.path.isdir('checkpoint'):
    os.mkdir('checkpoint')
filename = './checkpoint/ckpt_' + model_name + '.t7'
torch.save(state, filename)
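Loading it back is just the reverse; a minimal sketch, assuming model is already an instance of the same architecture:

checkpoint = torch.load('./checkpoint/ckpt_' + model_name + '.t7')
model.load_state_dict(checkpoint['model'])
acc = checkpoint['acc']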
https://github.com/aaron-xichen/pytorch-playground
python quantize.py --type alexnet --quant_method minmax --param_bits 8 --fwd_bits 8 --bn_bits 8 --ngpu 1
type=alexnet, quant_method=minmax, param_bits=8, bn_bits=8, fwd_bits=8, overflow_rate=0.0, acc1=0.5514, acc5=0.7819
print('save model...')
torch.save(model_raw.state_dict(), 'model_quantized.pth')
I am puzzled: after quantization, the saved model is still 234 MB. How do I save the quantized model so that it is actually smaller?
Thanks for your help.
"This is expected. Because it is a simulation tool. The quantized parameters are represented in float format, not integers. You need to do some post precess work yourself before deployment."
I have run another quantization method:
https://github.com/TropComplique/trained-ternary-quantization
Its model size is also unchanged...
"
Simple example how to save quantized values as int8:
import numpy as np
full precision weights
x = np.random.randn(5000, 5000).astype('float32')
np.save('x.npy', x) # 96 MB
quantization
threshold = 0.7
x[x > threshold] = 1.0
x[np.logical_and(x <= threshold, x >= -threshold)] = 0.0
x[x < -threshold] = -1.0
x = x.astype('int8')
np.save('quantized_x.npy', x) # 24 MB
In case of CNN weights, you have to find an appropriate
data structure to store 2-bit values.
Maybe try numpy.packbits.
"