
onnx-typecast's Introduction

onnx-typecast

A simple Python script to typecast ONNX model parameters from INT64 to INT32.

Why?

I wanted to play around with ONNX.js and soon figured out that it doesn't support ONNX models with INT64 parameters. OpenCV also doesn't seem to support INT64 parameters.

What does this script do?

  • The script goes through the parameters of each node of the ONNX model and blindly converts INT64 parameters to INT32 (see the sketch after this list).
  • It also converts the constant parameters from INT64 to INT32.
  • Finally, it creates a new model and saves it.
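
To make the idea concrete, here is a minimal sketch of the initializer pass, assuming only the onnx and numpy packages from requirements.txt. It illustrates the approach; it is not the repository's convert.py itself, which also rewrites node attributes and Constant nodes.

```python
# Sketch of the INT64 -> INT32 initializer pass (illustrative, not convert.py itself).
import numpy as np
import onnx
from onnx import numpy_helper

def cast_initializers_to_int32(model: onnx.ModelProto) -> onnx.ModelProto:
    graph = model.graph
    new_initializers = []
    for init in graph.initializer:
        arr = numpy_helper.to_array(init)
        if arr.dtype == np.int64:
            arr = arr.astype(np.int32)  # blind cast: no overflow handling
        new_initializers.append(numpy_helper.from_array(arr, name=init.name))
    del graph.initializer[:]
    graph.initializer.extend(new_initializers)
    return model

model = onnx.load("path/to/int64_model.onnx")
onnx.save(cast_initializers_to_int32(model), "path/to/converted_int32_model.onnx")
```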

What's the catch?

  • The script does not handle overflows and blindly converts all INT64 parameters to INT32.
  • So ops that require values greater than INT32.max or less than INT32.min may not behave as expected.
  • Please feel free to modify the script to account for this (one possible guard is sketched below).
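
A hypothetical guard one could add before the blind cast (the function name and placement are illustrative, not part of the current script): check that every value fits in the INT32 range and fail loudly instead of silently wrapping around.

```python
# Hypothetical overflow guard; convert.py does not currently do this.
import numpy as np

def to_int32_or_fail(arr: np.ndarray, name: str = "") -> np.ndarray:
    info = np.iinfo(np.int32)
    if arr.size and (arr.min() < info.min or arr.max() > info.max):
        raise OverflowError(f"{name}: values fall outside [{info.min}, {info.max}]")
    return arr.astype(np.int32)
```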

Alright. How do I use it?

  • Simple.
  • Install the requirements: pip install -r requirements.txt
  • Run: python convert.py path/to/int64_model.onnx path/to/converted_int32_model.onnx (a quick sanity check of the output is sketched below)
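
An optional sanity check of the converted file, assuming only the onnx package; this is a suggestion, not part of the script.

```python
import onnx

model = onnx.load("path/to/converted_int32_model.onnx")
onnx.checker.check_model(model)  # raises if the graph fails ONNX's structural checks
print(onnx.helper.printable_graph(model.graph))  # eyeball the converted graph
```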

Also Check Out

onnx-typecast's People

Contributors

aadhithya


onnx-typecast's Issues

Bad conversion of superpoint.onnx model

Hello,

I have tried to use your script (thanks for sharing!) to convert the superpoint.onnx model (available here). Apparently the model is converted without any issue, but when I try to load it with onnxruntime-web I get the following error:

Error: Can't create a session. ERROR_CODE: 1, ERROR_MESSAGE: Type Error: Type parameter (T) of Optype (Mul) bound to different types (tensor(int64) and tensor(int32) in node (/Mul_1).
    at t.checkLastError (bundle.min.js:1:307223)

On the other hand, if I use https://netron.app/ to check the model, I can see that the "keypoints" output still has int64 type:
(Screenshot: Netron view of the converted model showing the "keypoints" output as int64.)

I'm working with the following versions since I was not able to install the versions specified in the requirements:

numpy-1.26.1
typer-0.9.0
onnx-1.14.1
colorlog-4.7.2
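
For reference, one way to reproduce the Netron observation programmatically is to print the declared element type of each graph output of the converted model (the file path below is a placeholder):

```python
# Print the declared element type of every graph output (path is a placeholder).
import onnx
from onnx import TensorProto

model = onnx.load("superpoint.int32.onnx")
for out in model.graph.output:
    elem_type = out.type.tensor_type.elem_type
    print(out.name, TensorProto.DataType.Name(elem_type))  # e.g. "keypoints INT64"
```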

Cast Ops in onnx file

Sometimes an ONNX file has Cast ops that cast data to INT64. This case needs to be handled as well, by modifying the target-type attribute of those Cast ops to INT32.
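
A minimal sketch of what that extension might look like (a suggestion, not something the script currently does): rewrite the "to" attribute of every Cast node that targets INT64.

```python
# Possible extension (not in convert.py): retarget INT64 Cast nodes to INT32.
import onnx
from onnx import TensorProto

def retarget_int64_casts(model: onnx.ModelProto) -> onnx.ModelProto:
    for node in model.graph.node:
        if node.op_type != "Cast":
            continue
        for attr in node.attribute:
            if attr.name == "to" and attr.i == TensorProto.INT64:
                attr.i = TensorProto.INT32
    return model
```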

This is an invalid model. Type Error: Type 'tensor(int32)' of input parameter is invalid.

Thanks for the great work!

I'm getting this error after applying the conversion to a SwinB transformer (when trying to use it for inference):

ort_sess = ort.InferenceSession('latest.i32.onnx')
  File "/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from latest.i32.onnx failed:This is an invalid model. Type Error: Type 'tensor(int32)' of input parameter (7369) of operator (Reshape) in node (/model/backbone/patch_embed/Reshape) is invalid.

Any idea what might be going wrong, or how to work around it? :)

Conversion of MODNet .onnx model

Thanks for sharing this script. I tried it on MODNet (an open-source background-removal codebase).

Running the script worked, but when trying to do inference I get an error due to the Reshape operator:

INVALID_GRAPH : Load model from modnetpf16simint32.onnx failed:This is an invalid model. Type Error: Type 'tensor(int32)' of input parameter (480) of operator (Reshape) in node (Reshape_177) is invalid.
