
Comments (7)

zk1998 commented on July 20, 2024


peterjc123 commented on July 20, 2024

> Is there a way to stop these values being created? I can't immediately see where they come from to know if I can adjust the model or similar, but as they're unused, is there a way to have them automatically pruned?

Would you please try `quantizer = PostQuantizer(..., config={'extra_tracer_opts': {'eliminate_dead_graph': True}})`?
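
For context, a minimal sketch of where that `config` fits into the usual post-quantization flow (the model class and input shape here are placeholders):

```python
import torch
from tinynn.graph.quantization.quantizer import PostQuantizer

model = MyModel()                          # placeholder: your float model, weights loaded
dummy_input = torch.rand(1, 3, 224, 224)   # placeholder input shape

quantizer = PostQuantizer(
    model,
    dummy_input,
    work_dir='out',
    config={'extra_tracer_opts': {'eliminate_dead_graph': True}},
)
ptq_model = quantizer.quantize()           # traced/rewritten model, ready for calibration;
                                           # with the option above, unused values should be pruned
```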

> A side question - is there a way to convert expected inputs from float to int as well? I have image input to the network that I have to convert from 0-256 to -1.0-1.0, so if there was a way to stick with integer, that would also be useful.

`converter = TFLiteConverter(..., fuse_quant_dequant=True)` might work for your case.
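
Roughly, that slots into the export step like this (a sketch; `ptq_model` and `dummy_input` carry over from the snippet above):

```python
from tinynn.converter import TFLiteConverter

# `ptq_model` here is the quantized model returned by quantizer.convert(...)
# after calibration (see below in this thread).
converter = TFLiteConverter(
    ptq_model,
    dummy_input,
    tflite_path='out/model_q.tflite',
    fuse_quant_dequant=True,  # fold the Quantize/Dequantize boundary ops so the I/O stays integer
)
converter.convert()
```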


BmanClark commented on July 20, 2024

Thank you for the tips! That's got me a bit further, although now I'm getting:
```
RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "../aten/src/ATen/native/quantized/cpu/BinaryOps.cpp":203, please report a bug to PyTorch. failed to create QNNPACK Add operator
```
So I'll go and report the bug to PyTorch, after I've made sure I'm on the latest PyTorch.
The `eliminate_dead_graph` option seems like something many people are likely to want; maybe it could go in the example or be made more prominent in some way?

And I look forward to trying `fuse_quant_dequant=True`. Presumably I should make the dummy input int for that final conversion step as well? Does it cause any issues with the PTQ calibration, since the expected ranges will be different?


peterjc123 commented on July 20, 2024

> Thank you for the tips! That's got me a bit further, although now I'm getting:
> `RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "../aten/src/ATen/native/quantized/cpu/BinaryOps.cpp":203, please report a bug to PyTorch. failed to create QNNPACK Add operator`

This may happen if:

  1. you forgot to load the weights, or
  2. the values you are trying to add have very different magnitudes, e.g. `1e-8 + 1.0`; I don't think that addition should be quantized in that case (see the rough numbers sketched below).
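
A rough back-of-the-envelope for case 2, assuming plain 8-bit quantization with scale ≈ range / 255:

```python
# One addend sits around +-1e-8 while the output is dominated by the +-1.0 addend:
a_scale = 2 * 1e-8 / 255      # ~7.8e-11
out_scale = 2 * 1.0 / 255     # ~7.8e-03
ratio = a_scale / out_scale   # ~1e-08

# QNNPACK's add requires the input-to-output scale ratio to lie in [2**-14, 2**8)
# (see the error message later in this thread), and 1e-08 is far below that.
print(ratio, 2 ** -14)        # ~1e-08 6.103515625e-05
```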

> The `eliminate_dead_graph` option seems like something many people are likely to want; maybe it could go in the example or be made more prominent in some way?

Sure, I can add this.

> And I look forward to trying `fuse_quant_dequant=True`. Presumably I should make the dummy input int for that final conversion step as well? Does it cause any issues with the PTQ calibration, since the expected ranges will be different?

Nope, the input for the conversion step won't affect the PTQ calibration, because the quantized ranges are frozen when you call `quantizer.convert()`.
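
In other words, the order of operations looks roughly like this (a sketch; `calib_loader` is a hypothetical DataLoader of representative float inputs):

```python
import torch

ptq_model = quantizer.quantize()

# Calibration: the observers record the float ranges they see here.
ptq_model.eval()
with torch.no_grad():
    for calib_input in calib_loader:   # hypothetical calibration data
        ptq_model(calib_input)

# The observed ranges are frozen into quantization parameters at this point;
# whatever dummy input the converter sees afterwards cannot change them.
ptq_model = quantizer.convert(ptq_model)
```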


BmanClark commented on July 20, 2024

> This may happen if:
>
> 1. you forgot to load the weights, or
> 2. the values you are trying to add have very different magnitudes, e.g. `1e-8 + 1.0`; I don't think that addition should be quantized in that case.

I'm definitely loading the weights, so I guess I'll be looking for something like case 2, and then checking whether it matters or whether it's something that should be 0 but is hitting float rounding errors or similar.
Thanks for the information!


BmanClark commented on July 20, 2024

I think it must be case 2, as I eventually found this in the output:

```
Error in QNNPACK: failed to create add operator with 8.319039e-06 A-to-output scale ratio: scale ratio must be in [2**-14, 2**8) range
```
I'm guessing you think this means the network isn't a good candidate for quantization? Surely if you're adding two things on very different scales, the smaller one is going to be ignored in float32 or int8 anyway? But maybe it makes the quantization arithmetic impossible, I guess.
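
For what it's worth, the reported ratio is indeed out of range by a fair margin:

```python
ratio = 8.319039e-06      # the A-to-output scale ratio from the error message
lower = 2 ** -14          # 6.103515625e-05
print(ratio < lower)      # True
print(lower / ratio)      # ~7.3, i.e. about 7x below the allowed lower bound
```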

PyTorch doesn't make it easy to report bugs (I'm not sure I can make a minimal reproducer), and from what you say it might not be a bug anyway? I've tried asking a question on the PyTorch forums, but that's awaiting moderation...

Thanks for your help (and if you've got any other ideas!), but it looks like this one might just be a quantization fail?


peterjc123 commented on July 20, 2024

@BmanClark I guess you could print out the input/output scales of the add operations to see if anything looks weird. You can do that on the model after `quantizer.convert`.
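
A possible starting point (a sketch, not an official TinyNeuralNetwork API: after conversion, quantized add ops typically end up as `QFunctional` modules exposing an output `scale`):

```python
import torch.nn.quantized as nnq  # torch.ao.nn.quantized on newer PyTorch

# After: ptq_model = quantizer.convert(ptq_model)
for name, mod in ptq_model.named_modules():
    if isinstance(mod, nnq.QFunctional):
        print(f'{name}: output scale={float(mod.scale)}, zero_point={int(mod.zero_point)}')
```

Comparing each add's output scale against the output scales of the modules feeding it should surface the extreme ratio.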

