
Comments (5)

volcacius commented on May 26, 2024

Hi Tracy,

Yes, you can do it with some work.
First thing: in the mobilenetv1 example you are mentioning, Brevitas introduces some new learned parameters (the scale factors for the activations), so when you load a pretrained floating point model to quantize it, PyTorch is going to complain that it can't find those parameters in the pretrained model. To fix it, you can set the env flag BREVITAS_IGNORE_MISSING_KEYS=1.
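For instance, assuming the flag is read from the environment when brevitas is first imported, you can set it from Python like this (or export it from the shell before launching training):

import os

# Must happen before the first brevitas import, assuming the flag
# is read from the environment at import time; alternatively:
#   BREVITAS_IGNORE_MISSING_KEYS=1 python train.py
os.environ["BREVITAS_IGNORE_MISSING_KEYS"] = "1"

import brevitas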
The other important point is that you should keep exactly the same hierarchy of nn.Module between the floating point and the quantized implementation of the model, otherwise PyTorch won't be able to match the pretrained weights to the corresponding quantized modules. What I usually do is insert all the quantized layers with quantization disabled (setting the quantization type to QuantType.FP) and make sure I can reproduce the original floating point accuracy. Once I'm sure everything is okay, I enable quantization and start retraining.
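As a rough sketch of that two-step workflow, assuming the QuantConv2d/QuantReLU layers and the QuantType enum from the Brevitas API of this period (exact argument names may vary between versions):

from brevitas.core.quant import QuantType
from brevitas.nn import QuantConv2d, QuantReLU

# Step 1: quantization disabled, should match the floating point model exactly
conv = QuantConv2d(32, 64, kernel_size=3, weight_quant_type=QuantType.FP)
act = QuantReLU(quant_type=QuantType.FP, max_val=6.0)

# Step 2: once accuracy is reproduced, enable quantization and retrain
conv = QuantConv2d(32, 64, kernel_size=3, weight_quant_type=QuantType.INT, weight_bit_width=8)
act = QuantReLU(quant_type=QuantType.INT, bit_width=8, max_val=6.0)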
For a reference on how to quantize the residual connections, look at ProxylessNAS; it's quite similar to MobileNet V2. The idea is to define a QuantHardTanh, which behaves like a quantized identity, and use it before and after the add.
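A hypothetical sketch of that residual pattern (the block structure, the shared_act name, and the QuantHardTanh arguments are illustrative, not taken from the ProxylessNAS code):

import torch.nn as nn
from brevitas.core.quant import QuantType
from brevitas.nn import QuantHardTanh

class QuantResidualBlock(nn.Module):

    def __init__(self, branch):
        super(QuantResidualBlock, self).__init__()
        self.branch = branch
        # Quantized identity shared by both add operands, so they end up
        # on the same scale (min_val/max_val are placeholder ranges)
        self.shared_act = QuantHardTanh(quant_type=QuantType.INT, bit_width=8,
                                        min_val=-10.0, max_val=10.0)

    def forward(self, x):
        x = self.shared_act(x)
        return self.shared_act(self.branch(x) + x)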
Let me know how it goes.

Alessandro


xfeng23 commented on May 26, 2024

Hi @alessandro,

Thank you so much for such a detailed answer (this is my other account, by the way). I did as you said above. One problem: I set the env flag BREVITAS_IGNORE_MISSING_KEYS=1, and it works when I set the quantization type to QuantType.FP, but it still reports missing keys related to quantization when loading the pretrained model with the quantization type set to QuantType.INT.
Something like:
Missing key(s) in state_dict: "features.0.2.act_quant_proxy.fused_activation_quant_proxy.tensor_quant.scaling_impl.learned_value",
"features.1.shared_act.act_quant_proxy.fused_activation_quant_proxy.tensor_quant.scaling_impl.learned_value", ......

Do you have any ideas about this?
Another question: does nn.Dropout have a corresponding operation that can take a QuantTensor as input?

Thank you so much!

Best,
Tracy


volcacius commented on May 26, 2024

Hi Tracy,

Can you please double check that, when you set the env flag BREVITAS_IGNORE_MISSING_KEYS=1, running

import brevitas.config as config
print(config.IGNORE_MISSING_KEYS)

prints True? That's the variable that reads the env setting, so it should be True; otherwise it means the env variable is not being set properly.
You can also either set config.IGNORE_MISSING_KEYS = True manually, or, when you load the pretrained model, do model.load_state_dict(my_state_dict, strict=False). Be careful that with strict=False you disable any sort of check on the state dict.
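In code, either option looks like this (model and my_state_dict being placeholders for your own model and pretrained weights):

import brevitas.config as config

# Option 1: set the flag programmatically before calling load_state_dict
config.IGNORE_MISSING_KEYS = True
model.load_state_dict(my_state_dict)

# Option 2: tell PyTorch to skip missing/unexpected keys entirely
model.load_state_dict(my_state_dict, strict=False)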

Regarding dropout, I don't have a pre-made layer, but something like this should work (off the top of my head, haven't tested it). A QuantTensor is just a tuple, so you can simply unpack it, pass the data through the forward function, and then pack the output back into a QuantTensor:

import torch.nn as nn
from brevitas.quant_tensor import QuantTensor


class QuantDropout(nn.Dropout):

    def forward(self, input_quant_tensor):
        # Unpack the QuantTensor, apply standard dropout to the data tensor,
        # then repack the result with the unchanged scale and bit width
        inp, scale, bit_width = input_quant_tensor
        output = super(QuantDropout, self).forward(inp)
        output_quant_tensor = QuantTensor(tensor=output, scale=scale, bit_width=bit_width)
        return output_quant_tensor

You can take the same approach for nn.MaxPool2d too.
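Following the same pattern, an equally untested sketch for max pooling could look like this:

class QuantMaxPool2d(nn.MaxPool2d):

    def forward(self, input_quant_tensor):
        # Same unpack/repack trick: max pooling only selects existing
        # values, so the quantization scale is unchanged
        inp, scale, bit_width = input_quant_tensor
        output = super(QuantMaxPool2d, self).forward(inp)
        return QuantTensor(tensor=output, scale=scale, bit_width=bit_width)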
Let me know how it goes.

Alessandro


xfeng23 commented on May 26, 2024

Hi Alessandro,

Thanks for your quick reply. Now I can train it successfully, but there is one more issue. When I set the quantization flag to FP, the model trains normally: while training, 'free -m' shows memory usage increasing slowly. But when I set the quantization flag to INT, 'free -m' shows memory usage increasing much faster until it runs out of CPU memory, and the training process gets stuck. Do you have any ideas about this issue?

Best,
Tracy


volcacius commented on May 26, 2024

Hi Tracy,

Training-aware quantization is expensive compute- and memory-wise. The idea is that you are always trading off increased training cost for reduced inference cost.
With QuantType.FP quantization is disabled and you are just computing standard floating point, so you get normal PyTorch resource utilization.
If you are running out of memory, you should lower your batch size. Training on a GPU rather than a CPU is also highly suggested; with only a CPU you won't get very far. If you don't have access to a GPU, you can get a free one (with some limitations) on Google Colab.
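For completeness, the usual device selection boilerplate (model being a placeholder for your own model):

import torch

# Prefer the GPU, fall back to CPU only if none is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)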

Good luck with your training.

Alessandro

