zhenhuaw-me / tflite2onnx

Convert TensorFlow Lite models (*.tflite) to ONNX.

Home Page: https://zhenhuaw.me/tflite2onnx

License: Apache License 2.0

Python 97.94% Shell 2.06%
model-converter onnx pip tensorflow tflite

tflite2onnx's People

Contributors

briangrifiin, erizmr, ikbeomjeon, paulgavrikov, zhenhuaw-me

tflite2onnx's Issues

Assertion error in isQuantize subroutine

Describe the bug
The Quantize::isQuantize property is called at the very beginning of the Quantize::parse method via self.shorty.
Quantize::parse does not set self.status.parsed before jumping into Quantize::isQuantize.
Thus, the assertion inside Quantize::isQuantize fails.

To Reproduce
Try to convert the following model: https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/prediction/1

Detailed steps to reproduce:

  1. git clone https://github.com/jackwish/tflite2onnx
  2. cd tflite2onnx
  3. python3 -m virtualenv /tmp/onnx-env
  4. source /tmp/onnx-env/bin/activate
  5. python3 -m pip install -r requirements.txt
  6. python3 setup.py install
  7. wget https://storage.googleapis.com/tfhub-lite-models/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/prediction/1.tflite
  8. tflite2onnx ./1.tflite /tmp/out.onnx

Full log:

$ tflite2onnx ./1.tflite /tmp/out.onnx
2020-12-14 20:06:08,918 D [tflite2onnx][convert.py:37] tflite: ./1.tflite
2020-12-14 20:06:08,918 D [tflite2onnx][convert.py:38] onnx: /tmp/out.onnx
2020-12-14 20:06:08,924 D [tflite2onnx][model.py:21] Parsing the Model...
2020-12-14 20:06:08,925 D [tflite2onnx][graph.py:58] Parsing the Graph...
2020-12-14 20:06:08,925 D [tflite2onnx][graph.py:61] Parsing operator: 0
Traceback (most recent call last):
  File "/tmp/onnx-env/bin/tflite2onnx", line 14, in <module>
    load_entry_point('tflite2onnx==0.3.0', 'console_scripts', 'tflite2onnx')()
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/convert.py", line 55, in cmd_convert
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/convert.py", line 44, in convert
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/model.py", line 39, in convert
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/model.py", line 31, in parse
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/graph.py", line 63, in parse
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/quantize.py", line 33, in parse
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/common.py", line 122, in shorty
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/quantize.py", line 25, in type
  File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/quantize.py", line 29, in isQuantize
AssertionError

Additional context
Possible workaround:

diff --git a/tflite2onnx/op/quantize.py b/tflite2onnx/op/quantize.py
index ced5559..8c67f6f 100644
--- a/tflite2onnx/op/quantize.py
+++ b/tflite2onnx/op/quantize.py
@@ -26,8 +26,11 @@ class Quantize(Operator):
 
     @property
     def isQuantize(self):
-        assert(self.status.parsed)
-        return self.inputs[0].dtype is TensorProto.FLOAT
+        if self.status.parsed:
+            return self.inputs[0].dtype is TensorProto.FLOAT
+        else:
+            opcode = self.model.OperatorCodes(self.tflite.OpcodeIndex()).BuiltinCode()
+            return opcode is tflite.BuiltinOperator.QUANTIZE
 
     def parse(self):
         logger.debug("Parsing %s...", self.shorty)

Most converted models fail ONNX optimization

Describe the bug
ONNX models resulting from TFLite conversion fail during the ONNX optimization process. The ONNX checker does not report any problem, though.

To Reproduce

  1. Download the posenet_mobilenet_float_075_1_default_1.tflite model from here: posenet_mobilenet_float
  2. Convert it: tflite2onnx posenet_mobilenet_float_075_1_default_1.tflite test.onnx
  3. Optimize the resulting model with the attached script optimize_model.py.gz: python optimize_model.py --input test.onnx
  4. An obscure error is reported by the ONNX optimizer:
Processing model: test.onnx
Traceback (most recent call last):
  File "optimize_model.py", line 26, in <module>
    optimized_model = optimizer.optimize(model, ['eliminate_deadend'])
  File "/home/daz/anaconda3/envs/atonn/lib/python3.7/site-packages/onnx/optimizer.py", line 55, in optimize
    optimized_model_str = C.optimize(model_str, passes)
IndexError: _Map_base::at
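For reference, a minimal sketch of what optimize_model.py presumably does, reconstructed from the traceback above (the actual attached script may differ):

import argparse

import onnx
from onnx import optimizer  # removed in onnx >= 1.9; the onnxoptimizer package replaces it

parser = argparse.ArgumentParser()
parser.add_argument('--input', required=True)
args = parser.parse_args()

print('Processing model:', args.input)
model = onnx.load(args.input)
onnx.checker.check_model(model)  # reports no problem
optimized_model = optimizer.optimize(model, ['eliminate_deadend'])  # raises IndexError: _Map_base::at
onnx.save(optimized_model, args.input + '.opt.onnx')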

Operator request: TFLite OP: 102 SPLIT_V

Note: Currently, we only accept requests for TensorFlow Lite builtin operators.

What is the TensorFlow Lite operator you need?
(Please attach a tiny TFLite model *.tflite that contains ONLY the operator you need.)

TFLite OP: 102 SPLIT_V

What kind of service are you trying to deploy your model to?

Audio

Would you like to contribute the operator?

Maybe?

Additional context
Add any other context about the problem here.

Multiple graph support

We don't currently support multiple graphs, for two reasons:

  • We have not seen TFLite models that contain more than one graph
  • ONNX doesn't support multiple graphs in its representation

However, there are indeed models that may contain multiple graphs. #51 is an example (though not a good one).

In this case, we can extend our functionality in two steps:

  1. Convert one graph each time tflite2onnx is invoked. An interface to specify which graph is to be converted will be added (a hypothetical sketch follows below).
  2. Generate all graphs at once - introduce another level that wraps up Model rather than Graph.

This can be added to our roadmap but doesn't have a very high priority.
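A hypothetical sketch of the step-1 interface; the graph_index argument does not exist yet and the name is illustrative only:

import tflite2onnx

# Proposed: pick which TFLite subgraph to convert; would default to the first.
tflite2onnx.convert('multi-graph.tflite', 'graph0.onnx', graph_index=0)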

Operator Support: Convolution

Convolution is complicated, so we need to enable it step by step.

  • Trivial support
  • Strides
  • Group (Depthwise dedicated in TFLite)
  • Dilation
  • Padding

transfrom_axis = [input[p] for p in perm] IndexError: list index out of range

File "D:\anaconda3\lib\site-packages\tflite2onnx\layout.py", line 27, in transform
output = transform(input, self.source, self.target)
File "D:\anaconda3\lib\site-packages\tflite2onnx\layout.py", line 16, in transform
transfrom_axis = [input[p] for p in perm]
File "D:\anaconda3\lib\site-packages\tflite2onnx\layout.py", line 16, in
transfrom_axis = [input[p] for p in perm]
IndexError: list index out of range
python-BaseException
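For context, here is a simplified sketch (names approximate, not the actual layout.py) of the permute-based transform, showing why a tensor whose rank is smaller than the layout string triggers this IndexError:

def getPerm(source, target):
    # e.g. source='NHWC', target='NCHW' -> perm=[0, 3, 1, 2]
    assert sorted(source) == sorted(target)
    return [source.index(axis) for axis in target]

def transform(input, source, target):
    perm = getPerm(source, target)
    # If len(input) < len(perm) - e.g. a rank-2 shape meeting a 4-character
    # layout - input[p] raises IndexError: list index out of range.
    return [input[p] for p in perm]

print(transform([1, 224, 224, 3], 'NHWC', 'NCHW'))  # [1, 3, 224, 224]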

Quantization Semantic Convert

  • TFLite
    • Many TFLite operators have a quantized (integer) version.
    • Trivial arithmetic is computed directly, while complex operators are simulated via gemmlowp.
  • ONNX (wiki): 8-bit scale/zero-point linear quantization. Limited operator support.
    • Different from TFLite, ONNX defines quantized Conv and MatMul only.
    • It seems other operators need to run in float mode, with quantize/dequantize inserted when switching between float and quantized operators.
    • The integer operators (ConvInteger and MatMulInteger) act as part of a quantized operator - with requantization removed.
      Given this status, I am not even interested in translating TFLite quantization to ONNX - the latter is so limited.
  • ONNXRuntime Quantization Tool
    • Actually very limited capability compared with the TensorFlow stack.
    • Partially because ONNX operator definitions are very limited.
    • We may consider integrating the quantization tool into tflite2onnx - if there are no license issues?

Basic conversion failing

I wanted to convert a simple TFLite model into an ONNX model but hit issue [1] below. Here is the procedure to reproduce it with a public model:
a) Download mobilenet from [2]
b) Set up a virtualenv (Python 3.6) with TensorFlow (1.15.2 and 2.3 show the same issue)
c) pip install tflite2onnx
d) tflite2onnx mobilenet_v1_1.0_224_quant.tflite .

[1] onnx.onnx_cpp2py_export.checker.ValidationError: Unrecognized data_type (tensor name: TFLITE2ONNX_Scalar_uint8_151): 2
[2] https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_1.0_224_quant_and_labels.zip

Can anyone please help me figure out whether I am doing something wrong in using the tool?

Require Padding inference

We use auto_pad for most padding-related operators; however, ONNX Runtime currently doesn't support dilation with SAME* auto padding.

onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node. Name:'output_NCHW_to_NHWC_nchwc_1' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/nn/conv_attributes.h:38 onnxruntime::common::Status onnxruntime::ComputePadAndOutputShape(int64_t, int64_t, int64_t, int64_t, onnxruntime::AutoPadType, int64_t*, int64_t*, int64_t*) [with bool ForceSymmetricAutoPadding = false; int64_t = long int] dilation == 1 was false. Dilation not supported for AutoPadType::SAME_UPPER or AutoPadType::SAME_LOWER.
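A sketch of the padding inference that could replace auto_pad, assuming TensorFlow's SAME definition (per axis, extra cell at the end as in SAME_UPPER):

def same_pads(in_size, kernel, stride, dilation):
    effective_kernel = (kernel - 1) * dilation + 1  # dilated kernel extent
    out_size = (in_size + stride - 1) // stride     # ceil(in_size / stride)
    total = max(0, (out_size - 1) * stride + effective_kernel - in_size)
    return total // 2, total - total // 2           # (pad_begin, pad_end)

print(same_pads(in_size=224, kernel=3, stride=1, dilation=2))  # (2, 2)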

Implementing compound operators: HardSwish

HardSwish does not have a direct equivalent in ONNX. I intend to implement it as a HardSigmoid followed by a Mul. My question is: with the current architecture, how do you define an operator that doesn't map to a single built-in ONNX operator? What will the @property type(self) of the operator definition return?
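For reference, one way to express the lowering - a sketch, not the project's actual mechanism - is to emit two ONNX nodes, since HardSwish(x) = x * HardSigmoid(x) with alpha=1/6 and beta=0.5:

from onnx import helper

def lower_hardswish(input_name, output_name):
    inter = input_name + '_hardsigmoid'  # hypothetical intermediate tensor name
    hardsigmoid = helper.make_node('HardSigmoid', [input_name], [inter],
                                   alpha=1.0 / 6.0, beta=0.5)
    mul = helper.make_node('Mul', [input_name, inter], [output_name])
    return [hardsigmoid, mul]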

Use case survey

It's important to understand the use cases of our users. It will help us decide how to shape tflite2onnx - which features we take as first priority. Please help provide some information if possible. A possible template is below; please feel free to propose new fields or ignore some. :)

How do you generate/get the TFLite model?

Example: use TensorFlow quantization-aware training, or get it from mediapipe

Which ONNX backend are you going to use?

Example: TensorRT

What is the problem you are trying to solve?

Example: flower classification

Mediapipe FP16 model converted to ONNX - results do not match.

I converted the publicly available TFLite model from mediapipe here using the latest checkout with the FP16 Quantization Pattern Folding (version 0.3.1).
I then used the following code to compare the outputs of the two equivalent models using the TFLite interpreter and onnxruntime. The outputs do not match. I am still trying to track this down, but am reporting the issue here. The code to compare the outputs from the two inferences is attached below.

import numpy as np
import tflite_runtime.interpreter as tflite

import onnxruntime

def onnx_infer(model_file, input_shape, input_data):
    sess = onnxruntime.InferenceSession(model_file)
    #print(sess.get_inputs()[0])
    #print(sess.get_outputs()[0])
    required_shape = sess.get_inputs()[0].shape
    #print('converting input_data with shape {} to shape {}'.format(input_shape, required_shape))
    input_data = input_data.reshape(required_shape)
    ort_inputs = {sess.get_inputs()[0].name: input_data}
    ort_outs = sess.run(None, ort_inputs)
    return ort_outs[0]

def tflite_infer(model_file):
    interpreter = tflite.Interpreter(model_path=model_file)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    input_shape = input_details[0]['shape']
    input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)  # range [0,1)

    interpreter.set_tensor(input_details[0]['index'], input_data)

    interpreter.invoke()

    output_data = interpreter.get_tensor(output_details[0]['index'])
    results = np.squeeze(output_data)

    return input_data, input_shape, results

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '-tfm',
        '--tflite_model_file',
        required=True,
        help='.tflite model to be executed')
    parser.add_argument(
        '-ofm',
        '--onnx_model_file',
        required=True,
        help='.onnx model to be executed')

    args = parser.parse_args()

    input_data, input_shape, results = tflite_infer(args.tflite_model_file)
    #print('input_shape :', input_shape)
    #print('input_data :', input_data)
    #print('tflite results :', results)

    results_onnx = onnx_infer(args.onnx_model_file, input_shape, input_data)
    #print('onnx results :', results_onnx)

    print('outputs ', 'match' if np.allclose(results, results_onnx) else "don't match")
    #print(np.linalg.norm(results - results_onnx))

Fold "FP16 weights -> Dequantize -> FP32 -> Conv/MatMul..." to "FP32 weights -> Conv/MatMul..."

ONNX quantization doesn't take FP16 as a quantized data type; therefore, nearly all FP16-quantized TFLite models are unsupported (see this FAQ).

We recommend users switch to full integer quantization. But if the user cannot do that (for example, no TensorFlow model is available), we can fold "FP16 weights -> Dequantize -> FP32 -> Conv/MatMul..." into "FP32 weights -> Conv/MatMul..." to work around this issue.
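At the tensor level the fold is cheap; a minimal sketch, assuming the FP16 weight data is available as a numpy array (TFLite FP16 quantization is a plain cast, so offline dequantization is just an astype):

import numpy as np

def fold_fp16_dequantize(weight_fp16):
    # Materialize the FP32 weights so the Dequantize op can be dropped and
    # the consumer (Conv/MatMul...) reads the FP32 initializer directly.
    assert weight_fp16.dtype == np.float16
    return weight_fp16.astype(np.float32)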

Rank of shape of Reshape operator

[image: 2rank-reshape]

In the Squeezenet model hosted by TensorFlow, we found that the Reshape of the model has a rank-2 shape input, which is not supported by ONNXRuntime (as of 1.4.0).

onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape.h:24 virtual onnxruntime::common::Status onnxruntime::Reshape::Compute(onnxruntime::OpKernelContext *) const shapeTensor->Shape().NumDimensions() == 1 was false. A shape tensor must be a vector tensor.

This shape input of Reshape is pre-computed by the TensorFlow Lite converter from the pattern below, where Shape (after AvgPool) outputs [1, 1, 1, 1001], StridedSlice takes [1] from it, and Pack (now Stack) with [-1,] produces [[1,], [-1]], which is rank 2.
[image: shape-tf]

Everything looks good along the path TensorFlow -> TensorFlow Lite -> tflite2onnx -> ONNX, but ONNXRuntime doesn't support all scenarios. Maybe we need to find a workaround, for example always flattening the shape input of Reshape (see the sketch below).
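The flattening workaround itself is trivial at the tensor-data level; a sketch, assuming the shape input is a constant:

import numpy as np

def flatten_reshape_shape(shape_data):
    # e.g. [[1], [-1]] (rank 2) -> [1, -1] (rank 1), the vector ONNXRuntime expects
    return np.asarray(shape_data).reshape(-1)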

Testing becomes a problem

Testing currently involves these steps and issues:

  1. Download a *.tflite file from the web
    • As we add more operators, keeping the test model server updated becomes a problem.
    • Not sure if we can generate *.tflite on the fly and test with it (a sketch follows this list).
  2. Convert it into a *.onnx
  3. Run with TensorFlow Lite and ONNX Runtime, and compare the results.
    • The main issue here is data layout, for which shrub assumes ONNX uses NCHW and TensorFlow Lite uses NHWC.
    • However, this is operator-specific semantics. For example, Convolution involves data layout semantics, but elementwise operations don't.
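As for generating *.tflite on the fly, a hedged sketch with the TensorFlow 2.x converter (untested here) could look like:

import tensorflow as tf

def make_tiny_tflite(path='/tmp/add.tflite'):
    class AddOne(tf.Module):
        @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
        def __call__(self, x):
            return x + 1.0

    module = AddOne()  # keep the module alive while converting
    concrete = module.__call__.get_concrete_function()
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete])
    with open(path, 'wb') as f:
        f.write(converter.convert())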

AssertionError for DepthMultiplier

I am trying to convert this TFLite model to ONNX but end up with an AssertionError. I am using Python 3.8.

Traceback (most recent call last):
  File "/home/paulgavrikov/.local/bin/tflite2onnx", line 8, in <module>
    sys.exit(cmd_convert())
  File "/home/paulgavrikov/.local/lib/python3.8/site-packages/tflite2onnx/convert.py", line 55, in cmd_convert
    convert(args.tflite_path, args.onnx_path)
  File "/home/paulgavrikov/.local/lib/python3.8/site-packages/tflite2onnx/convert.py", line 44, in convert
    model.convert(explicit_layouts)
  File "/home/paulgavrikov/.local/lib/python3.8/site-packages/tflite2onnx/model.py", line 39, in convert
    self.parse()
  File "/home/paulgavrikov/.local/lib/python3.8/site-packages/tflite2onnx/model.py", line 31, in parse
    g.parse()
  File "/home/paulgavrikov/.local/lib/python3.8/site-packages/tflite2onnx/graph.py", line 63, in parse
    op.parse()
  File "/home/paulgavrikov/.local/lib/python3.8/site-packages/tflite2onnx/op/conv.py", line 81, in parse
    assert(option.DepthMultiplier() == 1)
AssertionError

Command line tool: Help message should include version number

Describe the bug
When calling tflite2onnx -h, it'd be useful to see the version number.

Expected behavior

tflite2onnx, Version 0.3.1 by jackwish, https://jackwish.net/tflite2onnx/
Convert TensorFlow Lite models to ONNX models

usage: tflite2onnx [-h] tflite_path onnx_path

positional arguments:
  tflite_path  Path to the input TFLite mode
  onnx_path    Path to save the converted ONNX mode

optional arguments:
  -h, --help   show this help message and exit

Actual behavior

usage: tflite2onnx [-h] tflite_path onnx_path

Convert TensorFlow Lite models to ONNX models

positional arguments:
  tflite_path  Path to the input TFLite mode
  onnx_path    Path to save the converted ONNX mode

optional arguments:
  -h, --help   show this help message and exit

To Reproduce

  1. Invoke tflite2onnx -h

Version
v0.3.1
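A minimal sketch of how the expected banner could be produced, assuming the CLI uses argparse (which the usage output suggests) and that the package exposes a __version__ attribute (an assumption):

import argparse
from tflite2onnx import __version__  # assumed to exist

DESCRIPTION = ('tflite2onnx, Version %s, https://zhenhuaw.me/tflite2onnx\n'
               'Convert TensorFlow Lite models to ONNX models' % __version__)

parser = argparse.ArgumentParser(prog='tflite2onnx', description=DESCRIPTION,
                                 formatter_class=argparse.RawDescriptionHelpFormatter)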

Reshape - spatial semantic removed but post operators need it still

Describe the bug
Reshape is executed in C-like order by default, i.e. with the last axis index changing fastest and the first axis index changing slowest. Therefore, when there are implicit-layout operators (such as Conv) before the Reshape, a different input layout will affect the reshaped result, e.g. (1,4,4,8) -> (1, 128) differs from (1,8,4,4) -> (1, 128). Although the output shape is the same, the entries of the tensor will be in a different order.
(ONNX Reshape is similar to numpy.reshape. A simple numpy example, reconstructed below, describes this condition: the outputs of a.reshape((1, 12)) and b.reshape((1, 12)) differ.)
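A reconstruction of that example (the original image is not reproduced here):

import numpy as np

a = np.arange(12).reshape((1, 2, 2, 3))  # pretend this is NHWC
b = np.transpose(a, (0, 3, 1, 2))        # the same data laid out as NCHW
print(a.reshape((1, 12)))  # [[ 0  1  2  3  4  5  6  7  8  9 10 11]]
print(b.reshape((1, 12)))  # [[ 0  3  6  9  1  4  7 10  2  5  8 11]]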

Possible solution
A possible solution is to add an additional Transpose before the Reshape in this specific condition.
I have developed a naive fakeTranspose method (mimicking the fakeBroadingcast) that currently adds a Transpose before every Reshape. But it indeed needs further judgment on whether there are implicit-layout operators before the Reshape.

What do you think about this solution, @jackwish? It would also be great if you could share some suggestions on how to detect this specific (actually quite common) case. Thanks!

Additional context
The bug seems to have "escaped" the current graph-level tests. I didn't notice this problem until I fed the same input into the .tflite model and its corresponding .onnx and compared the exact output values. It would be great if we could add some tests that compare exact output values, in addition to the computation graph.

Objects management

Overall, tflite2onnx needs to manage objects including:

  • TensorFlow Lite objects, where the semantics come from. These objects should rarely be used after they have been parsed into tflite2onnx's IR (Intermediate Representation).
  • tflite2onnx's IR, which mainly aims to handle the data layout issue. In the very first draft, we didn't have any real IR, as we simply translated everything we saw into ONNX objects such as tensors and operators. However, due to the default data layout difference (NHWC in TensorFlow Lite vs. NCHW in ONNX), we need to build a real graph of the network and walk it to handle the data layout issue. For this, we mainly need to manage a graph IR.
  • ONNX objects that we need to build the final model: tensors, operators, and graphs.

Conversion of model fails with assertion in Quantize operator

Describe the bug
When I try to convert segm_full_v491.zip on the command line, I get the following traceback:

<class 'tflite2onnx.op.quantize.Quantize'>
Traceback (most recent call last):
  File "/home/de/.local/bin/tflite2onnx", line 8, in <module>
    sys.exit(cmd_convert())
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/convert.py", line 55, in cmd_convert
    convert(args.tflite_path, args.onnx_path)
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/convert.py", line 44, in convert
    model.convert(explicit_layouts)
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/model.py", line 39, in convert
    self.parse()
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/model.py", line 31, in parse
    g.parse()
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/graph.py", line 63, in parse
    op.parse()
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/op/quantize.py", line 33, in parse
    logger.debug("Parsing %s...", self.shorty)
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/op/common.py", line 122, in shorty
    return '[%s](%s)' % (self.name, self.type)
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/op/quantize.py", line 25, in type
    return 'QuantizeLinear' if self.isQuantize else 'DequantizeLinear'
  File "/home/de/.local/lib/python3.8/site-packages/tflite2onnx/op/quantize.py", line 29, in isQuantize
    assert(self.status.parsed)
AssertionError

To Reproduce

  1. Extract the model linked above
  2. Execute tflite2onnx segm_full_v491.tflite segm_full_v491.onnx

Additional context
The model is the background segmentation from Google Meet

TFLite convert to ONNX error: Unsupported TFLite OP: 127

Describe the bug
When I convert a TF model to TFLite and use TensorFlow ops with

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True

I manage to get the tflite file, but when converting the tflite to onnx, I get the error:

NotImplementedError: Unsupported TFLite OP: 127

Since I think we do not need the concrete implementation of the op in the process of converting to an ONNX file, is there a way I can get the ONNX file normally?

Thank you very much

INT8 quantization support

So far, we have UINT8 quantization enabled. Per #30 (comment) (thanks to @kapulkin), we found that TensorFlow 2.0 prioritizes the INT8 approach over UINT8 (though many tools and kernels are currently UINT8-based).

We plan to enable INT8 quantization in the next release cycle (I mean something like v0.4.0). This issue tracks related topics.

how to choose opset version?

Some platforms need opset version 10 to run the ONNX model, but that of the converted ONNX model is 11. How can I change the opset version?
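For reference, ONNX ships a version converter that can often downgrade the opset after conversion, though not every operator converts between every opset pair; a sketch (file names illustrative):

import onnx
from onnx import version_converter

model = onnx.load('model-opset11.onnx')
downgraded = version_converter.convert_version(model, 10)
onnx.save(downgraded, 'model-opset10.onnx')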

Type Error: Type 'tensor(int32)' of input parameter (up_sampling2d/Size) of operator (Resize) in node (up_sampling2d) is invalid.

I used tflite2onnx to transform the TFLite model to an ONNX model,
and when I test the ONNX model, the following error occurs:
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ../models/tflite/palm_detection.onnx failed:This is an invalid model. Type Error: Type 'tensor(int32)' of input parameter (up_sampling2d/Size) of operator (Resize) in node (up_sampling2d) is invalid.

Quantize class requires being parsed in the first line of parse method

Bug description

The bug is in the first line of the Quantize.parse method, the line with the debug message: Quantize.parse fails with an assertion error.

    @property
    def type(self):
        return 'QuantizeLinear' if self.isQuantize else 'DequantizeLinear' # requires being parsed indirectly

    @property
    def isQuantize(self):
        assert(self.status.parsed) # there is a requirement
        return self.inputs[0].dtype is TensorProto.FLOAT

    def parse(self):
        logger.debug("Parsing %s...", self.shorty) # shorty is a property of the base class Operator that reads the type property
        op = self.tflite
        opcode = self.model.OperatorCodes(op.OpcodeIndex()).BuiltinCode()
        assert(opcode is tflite.BuiltinOperator.QUANTIZE or tflite.BuiltinOperator.DEQUANTIZE)

        assert(op.InputsLength() == 1)
        assert(op.OutputsLength() == 1)
        self.parseInput(0)
        self.parseOutput(0)

        self.setParsed()

Operator.shorty

    @property
    def shorty(self):
        return '[%s](%s)' % (self.name, self.type)

To Reproduce

  1. Call tflite2onnx.convert on a quantized model
  2. Debug log:
[2020-12-26 21:28:26,239] DEBUG  tflite: data/u-net-model.tflite
[2020-12-26 21:28:26,239] DEBUG  onnx: data/u-net-resnet18-model.onnx
[2020-12-26 21:28:26,248] DEBUG  Parsing the Model...
[2020-12-26 21:28:26,248] DEBUG  Parsing the Graph...
[2020-12-26 21:28:26,248] DEBUG  Parsing operator: 0
<class 'tflite2onnx.op.quantize.Quantize'>
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/stask/projects/motionLearning/chat/unet/broccole/convertTfliteToOnnx.py", line 37, in <module>
    main()
  File "/Users/stask/projects/motionLearning/chat/unet/broccole/convertTfliteToOnnx.py", line 34, in main
    tflite2onnx.convert(tfliteFilePath, onnxFilePath)
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/convert.py", line 44, in convert
    model.convert(explicit_layouts)
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/model.py", line 39, in convert
    self.parse()
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/model.py", line 31, in parse
    g.parse()
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/graph.py", line 63, in parse
    op.parse()
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/op/quantize.py", line 33, in parse
    logger.debug("Parsing %s...", self.shorty)
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/op/common.py", line 122, in shorty
    return '[%s](%s)' % (self.name, self.type)
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/op/quantize.py", line 25, in type
    return 'QuantizeLinear' if self.isQuantize else 'DequantizeLinear'
  File "/Users/stask/Library/Python/3.8/lib/python/site-packages/tflite2onnx/op/quantize.py", line 29, in isQuantize
    assert(self.status.parsed)
AssertionError

Operator request: quantized SSD MobileNet

Note: Currently, we only accept requests for TensorFlow Lite builtin operators.

What is the TensorFlow Lite operator you need?
During the conversion I obtain: "NotImplementedError: Unsupported TFLite OP: 14"
The .tflite file is available here:
http://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip

What kind of service are you trying to deploy your model to?
Object detection

Would you like to contribute the operator?
Yes

Additional context
Add any other context about the problem here.

Installation via pip git clone fails

Describe the bug
It is not possible to install tflite2onnx via pip install git+<REPO> due to AttributeError: module 'tflite.TensorType' has no attribute 'BOOL'

To Reproduce

pip3 install git+https://github.com/jackwish/tflite2onnx.git
Collecting git+https://github.com/jackwish/tflite2onnx.git
  Cloning https://github.com/jackwish/tflite2onnx.git to /tmp/pip-req-build-zjsf07kd
  Running command git clone -q https://github.com/jackwish/tflite2onnx.git /tmp/pip-req-build-zjsf07kd
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-zjsf07kd/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-zjsf07kd/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-req-build-zjsf07kd/pip-egg-info
         cwd: /tmp/pip-req-build-zjsf07kd/
    Complete output (17 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-req-build-zjsf07kd/setup.py", line 2, in <module>
        import tflite2onnx
      File "/tmp/pip-req-build-zjsf07kd/tflite2onnx/__init__.py", line 3, in <module>
        from tflite2onnx.convert import convert
      File "/tmp/pip-req-build-zjsf07kd/tflite2onnx/convert.py", line 7, in <module>
        from tflite2onnx.model import Model
      File "/tmp/pip-req-build-zjsf07kd/tflite2onnx/model.py", line 7, in <module>
        from tflite2onnx.graph import Graph
      File "/tmp/pip-req-build-zjsf07kd/tflite2onnx/graph.py", line 6, in <module>
        from tflite2onnx.tensor import TensorFactory
      File "/tmp/pip-req-build-zjsf07kd/tflite2onnx/tensor.py", line 7, in <module>
        from tflite2onnx import mapping
      File "/tmp/pip-req-build-zjsf07kd/tflite2onnx/mapping.py", line 35, in <module>
        TensorType.BOOL: 'bool',
    AttributeError: module 'tflite.TensorType' has no attribute 'BOOL'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Version
Which version are you using? master.

Additional context
Ubuntu 20.04, python3.8.6

Operator Support Status

This issue aims to collect operator requirements and track enabling progress. The full list of supported operators is in the Operator Support Status document. Developers are responsible for updating it accordingly.

If you need an operator, please provide a minimized TensorFlow Lite model (*.tflite), which should contain only that operator and have a small tensor size (as we currently host TFLite models in the repo, we'd like to keep them as small as possible).

It would be great if you could take some time to enable them, which is pretty simple. Please check how to enable a new operator for more help.

Enquiry about parsing optional input

Hi jackwish,

Thanks for sharing your awesome framework!
I am trying to enable a new operator, Resize, from resize_bilinear; the ONNX description of Resize is shown below:
[image: ONNX Resize operator description]
Only one of 'scales' and 'sizes' should be specified in this case. I am not sure how to 'skip' one of the arguments (and also the roi) when implementing the parse part, because it seems the inputs are stored in a list without a key.
I am wondering if you could share any suggestions about it?

Thanks,
Mingrui
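For reference, ONNX allows optional inputs to be skipped positionally by passing an empty string as the input name; a sketch for a Resize (opset 13, where roi and scales are optional) driven by 'sizes' only:

from onnx import helper

node = helper.make_node(
    'Resize',
    inputs=['X', '', '', 'sizes'],  # roi and scales skipped with ''
    outputs=['Y'],
    mode='linear',
)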

Installation via build-wheel.sh fails

Describe the bug
It is not possible to install tflite2onnx via the steps described in the readme.

Download the source code: git clone https://github.com/jackwish/tflite2onnx.git
Build the package: ${tflite2onnx}/scripts/build-wheel.sh

To Reproduce

paulgavrikov@paulgavrikov-OMEN-by-HP-Laptop-15-dh1xxx:~$ git clone https://github.com/jackwish/tflite2onnx.git
Cloning into 'tflite2onnx'...
remote: Enumerating objects: 168, done.
remote: Counting objects: 100% (168/168), done.
remote: Compressing objects: 100% (126/126), done.
remote: Total 2249 (delta 90), reused 90 (delta 39), pack-reused 2081
Receiving objects: 100% (2249/2249), 140.72 MiB | 2.55 MiB/s, done.
Resolving deltas: 100% (1593/1593), done.

paulgavrikov@paulgavrikov-OMEN-by-HP-Laptop-15-dh1xxx:~$ tflite2onnx/scripts/build-wheel.sh 
tflite2onnx/scripts/build-wheel.sh: line 10: pip: command not found
  File "/home/paulgavrikov/tflite2onnx/setup.py", line 17
SyntaxError: Non-ASCII character '\xe7' in file /home/paulgavrikov/tflite2onnx/setup.py on line 17, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

Version
Which version are you using? master.

Additional context
Ubuntu 20.04, python3.8.6

"NotImplementedError: This path has not been tried" Error

I have been trying to convert BlitzNet from TFLite to ONNX.
BlitzNet is provided in TF, so I converted it to TFLite and now to ONNX.
In the process I get the following error:

Command line: tflite2onnx ./blitz_seg.tflite ./onnx/model_seg_tflite.onnx
Empty tensor used, please double confirm your code path!
Traceback (most recent call last):
  File "/home/juhyun/Tf/bin/tflite2onnx", line 8, in <module>
    sys.exit(cmd_convert())
  File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/convert.py", line 58, in cmd_convert
    convert(args.tflite_path, args.onnx_path)
  File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/convert.py", line 44, in convert
    model.convert(explicit_layouts)
  File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/model.py", line 39, in convert
    self.parse()
  File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/model.py", line 31, in parse
    g.parse()
  File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/graph.py", line 64, in parse
    op.parse()
  File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/op/resize.py", line 102, in parse
    raise NotImplementedError("This path has not been tried")
NotImplementedError: This path has not been tried

I have tried not using the command line, but get the same error. There is so little explanation of how to use tflite2onnx, so some explanation would be really appreciated!
[screenshot: Screen Shot 2021-04-14 at 2 56 02 AM]

"This path has not been tried" Error

I have been trying to convert BlitzNet from tflite to onnx.
BlitzNet is provided in Tf, so i converted to tflite and now to onnx.
On the process I get following error:

Comman line: tflite2onnx ./blitz_seg.tflite ./onnx/model_seg_tflite.onnx
Empty tensor used, please double confirm your code path!
Traceback (most recent call last):
File "/home/juhyun/Tf/bin/tflite2onnx", line 8, in
sys.exit(cmd_convert())
File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/convert.py", line 58, in cmd_convert
convert(args.tflite_path, args.onnx_path)
File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/convert.py", line 44, in convert
model.convert(explicit_layouts)
File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/model.py", line 39, in convert
self.parse()
File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/model.py", line 31, in parse
g.parse()
File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/graph.py", line 64, in parse
op.parse()
File "/home/juhyun/Tf/lib/python3.6/site-packages/tflite2onnx/op/resize.py", line 102, in parse
raise NotImplementedError("This path has not been tried")
NotImplementedError: This path has not been tried

I have tried not using command line, but same error. There is so little explanation of how to use tflite2onnx, so some explanation would be really thankful!
Screen Shot 2021-04-14 at 2 56 02 AM

Interface to retrieve the list of supported operators

We maintain the list manually here: https://github.com/jackwish/tflite2onnx/blob/master/docs/operator-support-status.md
Every time we add a new operator, we need to update the list. It should be possible to get the list automatically via an interface such as getSupportedOperators(), which returns a list of strings.

This can be done by iterating the TypeMapping of the operator converter classes and bridging them with tflite.BuiltinOperator. We can manually build a full list of TFLite built-in operators (in string format), then select the supported ones. Should be easy.
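A sketch of the proposed interface; the OP_CONVERTERS registry and the exact TypeMapping layout are assumptions:

import tflite

def getSupportedOperators():
    # Map opcode -> name from the tflite package's BuiltinOperator constants.
    code2name = {v: k for k, v in vars(tflite.BuiltinOperator).items()
                 if not k.startswith('_')}
    supported = set()
    for converter_class in OP_CONVERTERS:  # hypothetical registry of converter classes
        supported.update(converter_class.TypeMapping.keys())
    return sorted(code2name[code] for code in supported)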

Potential bug: Reshape(one input, one attr) operator is not supported

Describe the bug
It seems there is another form of the Reshape operator, shown below, with one input and one attribute, violating assert(op.InputsLength() == 2) in reshape.py.

[image: Reshape with one input and one attribute]

To Reproduce
model file: palm_detection.tflite

Additional context
To reproduce this bug, note that there are two operators in this model that have not been supported yet (prelu, ResizeBilinear). I have implemented them locally and will try to send a pull request.
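A hedged sketch of tolerating both Reshape forms in reshape.py; the API names follow the flatbuffers-generated tflite package, but this is untested:

import tflite

def parse_target_shape(op, options):
    if op.InputsLength() == 2:
        return None  # the shape comes from the second input tensor, as handled today
    assert op.InputsLength() == 1
    reshape_options = tflite.ReshapeOptions()
    reshape_options.Init(options.Bytes, options.Pos)
    return reshape_options.NewShapeAsNumpy()  # the attribute-carried target shape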

Broadcasting

Going further, with propagation-based layout handling, we have a broadcasting issue for binary operators.

  • If a binary operator, say Add, is not layout-propagated, everything is fine.
  • Sometimes, a propagation like NHWC -> NCHW breaks broadcastability.

For example, take a TFLite Add with input shapes (2, 3, 4, 5) and (5,), which is broadcastable. With no layout propagation, ONNX supports this case. Even if the shapes are (4, 2, 3, 5) and (5,) after propagation, it's still fine. However, for NHWC -> NCHW, which we need most often, the shapes become (2, 5, 3, 4) and (5,) - no longer broadcastable.
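A numpy illustration of one possible fix: reshape the rank-1 operand so the channel dimension is explicit, restoring broadcastability after NHWC -> NCHW:

import numpy as np

a = np.ones((2, 5, 3, 4))          # (2, 3, 4, 5) after NHWC -> NCHW propagation
b = np.ones((5,))                  # no longer broadcastable against a
b_fixed = b.reshape((1, 5, 1, 1))  # make the channel dimension explicit
print((a + b_fixed).shape)         # (2, 5, 3, 4)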

Call for Contribution

Greetings!

I'd like to thank you for your attention and contribution to tflite2onnx. That means a lot to the project and to me.

I have been putting effort into tflite2onnx for about one year, including the codebase, the docs, the CI, answering questions, fixing reported bugs, and so on. As of early 2021, we have released four versions and are planning v0.3.2. I am very delighted that it does help the community a bit.

As you may know, tflite2onnx is a side project for me, and it does take time to push every single feature forward. I am somewhat overloaded by the growing feature requests, as I am running shrub (the test infrastructure) and tflite (the TFLite parser) in parallel. Thankfully, we have received much help from the community, namely @briangrifiin @erizmr @IkbeomJeon @paulgavrikov, and everyone who is asking questions.

To push and accelerate the evolution of tflite2onnx, I am calling for contribution to this package:

  • Operator enabling - we need to extend our support of TFLite operators.
  • Issue debugging - when an error was raised, check the detail.
  • Feature development - check them in the issues.

Don't know where to start? Check the To Do column of the dashboard for the v0.4.0 release. We have rich documentation, and I am always here to provide full support. Please join and push forward!

Cheers!

ERROR Source ~/tflite2onnx does not appear to be a Python project: no pyproject.toml or setup.py

Describe the bug

I get an error when running a statement in the build script: ERROR Source ~/tflite2onnx does not appear to be a Python project: no pyproject.toml or setup.py. The error is raised by python -m build --outdir ${root_dir}/assets/dist.

To Reproduce

  1. git clone https://github.com/jackwish/tflite2onnx.git && cd tflite2onnx
  2. ./scripts/build-wheel.sh

Version

Which version are you using? latest tflite2onnx version.

Additional context

I'm in a virtual environment created by pyenv+virtualenv, python version is 3.7.4 on WSL.

Project infrastructure

Improve the infrastructure for better engineering and management.

  • Coverage report with GitHub Actions (deferred until we open source)
  • Contribution guide
    • Guide to add operator
    • Issue template
  • Command line interface
  • shrub needs to support preset data layout - ONNX can run with either NCHW or NHWC data.
