pinto0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.

License: MIT License

Python 99.90% Dockerfile 0.10%
deep-learning machine-learning model-converter models onnx tensorflow tensorflow-lite tflite docker onnx-tensorflow

onnx2tf's Issues

Yolov7-tiny to TensorflowLite conversion results in a dynamic output model incompatible with TfLite Java API

Issue Type

Others

onnx2tf version number

1.5.36

onnx version number

1.12.0

tensorflow version number

2.10.1

Download URL for ONNX

pip install onnx==1.12.0

Parameter Replacement JSON

none

Description

Hi, your library is awesome!

I converted the Yolov7-tiny from PyTorch to TfLite using:
onnx2tf -i yolov7-tiny.onnx -o models-NHWC-final/ -osd -oh5 -cotof

I am trying to use it on an Android device. The model works when tested on a PC; however, according to the documentation (https://www.tensorflow.org/lite/guide/inference), the TensorFlow Lite Java API for Android does not support models with dynamic outputs, while the resulting yolo tflite model has a dynamic number of outputs (the number of outputs changes with the number of detections).
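For reference, a minimal sketch (not from onnx2tf; the model path is hypothetical) of how dynamic outputs can be confirmed from Python: a -1 in shape_signature marks a dimension that is only resolved at runtime.

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='models-NHWC-final/model_float32.tflite')  # hypothetical path
for detail in interpreter.get_output_details():
    # e.g. shape_signature [ 1 -1  6] -> dynamic number of detections
    print(detail['name'], detail['shape_signature'])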

On the other hand, if I follow the conversion path PyTorch -> ONNX -> TensorFlow, I do get a yolov7 model with a fixed output size, so I suspect it is possible to achieve this with onnx2tf as well, while also doing the NCHW to NHWC conversion in the process.

Is there a way to have onnx2tf output a fixed/static output .tflite model for yolov7-tiny?

Thank you

[MobileFormer] Dimensions must be equal [Add Layer]

Issue Type

Others

onnx2tf version number

1.4.2

onnx version number

1.12.0

tensorflow version number

2.10.1

Download URL for ONNX

https://drive.google.com/file/d/1vGzO9MZGX-yGz6ATm4yHVJMASZACuy2t/view?usp=share_link

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Add_282",
      "param_target": "pre_process_transpose_perm", 
      "param_name": "perm",
      "values": [0, 3, 1, 2]
    }
  ]
}

Description

Hi, thank you so much for actively maintaining this useful repo. I am trying to convert my custom PyTorch model to ONNX and then to Keras for further tuning. I exported the model using opset==12. The issue occurred during conversion at an Add layer when I ran convert('onnx/py_model.onnx').

In attempting to solve this problem, I provided the parameter replacement file to the converter but got the same issue. It looks like one input of the layer is in NHWC and the other is in NCHW for op_name Add_282 (number 1 on the screenshot). From the error, I can see that one of the inputs to the Add is in the correct order (NCHW) while the other is in NHWC. So I looked back at the operation Add_263 (number 2 on the screenshot), which is responsible for generating the wrong input, and found that the Keras input to this operation is already NHWC. I did not trace further up because I think that illustrates my problem: only one of the inputs to Add_282 seems to be in the correct shape.

So I wonder if there is a quick fix for this: maybe I forgot to set some parameter, did the parameter replacement incorrectly, or hit a version issue? I also found #17, but at the very end you said there is no need to do parameter replacement for the Transpose operation after 1.1.38 and with opset>11. I apologize in advance if I missed something super obvious, but could you provide some tips on how to fix this? Thanks again!

Screenshot

[screenshot]

Traceback (most recent call last):
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\onnx2tf\utils\common_functions.py", line 267, in print_wrapper_func
    result = func(*args, **kwargs)
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\onnx2tf\utils\common_functions.py", line 329, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\onnx2tf\utils\common_functions.py", line 37, in get_replacement_parameter_wrapper_func
    func(*args, **kwargs)
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\onnx2tf\ops\Add.py", line 126, in make_node
    tf.math.add(
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\keras\layers\core\tf_op_layer.py", line 119, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "C:\Users\qz796\anaconda3\envs\phoodify\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.math.add_101" (type TFOpLambda).

Dimensions must be equal, but are 56 and 16 for '{{node tf.math.add_101/Add}} = AddV2[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,56,56,16], [1,16,56,56].

Call arguments received by layer "tf.math.add_101" (type TFOpLambda):
  • x=tf.Tensor(shape=(1, 56, 56, 16), dtype=float32)
  • y=tf.Tensor(shape=(1, 16, 56, 56), dtype=float32)
  • name='Add_248'
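For illustration, a minimal sketch of the mismatch in the traceback (my own example, not onnx2tf code): one Add operand is NHWC and the other is still NCHW, and transposing the NCHW operand with perm [0, 2, 3, 1] reconciles the shapes.

import numpy as np

x = np.zeros((1, 56, 56, 16), dtype=np.float32)  # NHWC operand
y = np.zeros((1, 16, 56, 56), dtype=np.float32)  # NCHW operand
y_nhwc = np.transpose(y, (0, 2, 3, 1))           # (1, 56, 56, 16)
print((x + y_nhwc).shape)                        # (1, 56, 56, 16)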

Implementation of strict mode

Issue Type

Others

onnx2tf version number

1.7.x

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

N/A

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. Add an ultra-slow conversion mode that strictly avoids shape mismatch errors and precision errors.

Background

Since the internal processing already implements a number of fairly complex workarounds to avoid conversion errors, we have deliberately not implemented a correction process for accuracy errors at this time, in order not to further increase the complexity of the logic. Instead, I have implemented a mechanism whereby users visually identify which operation's channel transpositions cause problems and change the behavior of the tool themselves.

However, I cannot overlook situations where accuracy errors remain despite a successful model conversion, and I hope to eradicate them in the future. Requiring users to spot the problem areas themselves and then change the tool's behavior by hand is very costly.

Since the internal processing mechanism needs to be significantly revised, we intend to implement the many additional modifications, with tests, gradually across minor version upgrades, to the extent that they do not affect existing processing.

Idea

  • Add all OPs of ONNX to the output OPs of the graph.
  • Run ONNX inference with a dummy tensor only once to get sample inference results for all OPs.
  • Always compare the contents of the ONNX output tensor and the TensorFlow output tensor before converting each OP.
  • Channel placement will always be inconsistent between ONNX and TensorFlow, so the following implementation can be called from all OPs at any time (a usage sketch follows this list):
    import itertools
    import sys
    from typing import Dict, List

    import numpy as np

    # Assumed sentinel (not shown in the excerpt): "no comparable error yet".
    ONNX_INF_INDEX_VALUE = sys.float_info.max

    def onnx_tf_tensor_validation(
        *,
        onnx_tensor_infos: Dict[str, np.ndarray],
        tf_tensor_infos: Dict[str, np.ndarray],
        rtol: float=1e-05,
        atol: float=1e-05,
    ) -> Dict[str, List]:
        """Check if the ONNX tensor and the TF tensor are approximate.

        Parameters
        ----------
        onnx_tensor_infos: Dict[str, np.ndarray]
            ONNX tensors to be verified
            {
                output_name: np.ndarray,
                output_name: np.ndarray,
                        :
            }
        tf_tensor_infos: Dict[str, np.ndarray]
            TF tensors to be verified, keyed by the same output names
        rtol: float=1e-05
            The relative tolerance parameter
        atol: float=1e-05
            The absolute tolerance parameter

        Returns
        ----------
        check_results: Dict[str, List]
            Tensor comparison results
            {
                onnx_output_name: [
                    onnx_tensor,
                    matched_flg,  # 0: Unmatched, 1: Matched, 2: Skipped (Deleted or Shape Unmatched)
                    max_abs_err,
                ]
            }
        """
        check_results = {
            onnx_output_name: [onnx_tensor, False, 0.0]
            for onnx_output_name, onnx_tensor in onnx_tensor_infos.items()
        }
        for onnx_output_name, onnx_check_info in check_results.items():
            onnx_tensor: np.ndarray = onnx_check_info[0]
            tf_tensor: np.ndarray = tf_tensor_infos[onnx_output_name]
            onnx_tensor_shape = onnx_tensor.shape
            max_abs_err = ONNX_INF_INDEX_VALUE
            # Example with onnx_dummy_data = np.random.random_sample([1,3,224,224])
            # and tf_dummy_data = onnx_dummy_data.transpose([0,2,3,1]) (rank 4):
            #   tf_shape_transpose_perms: all 24 permutations of (0, 1, 2, 3)
            #   tf_target_transpose_perms: [(0, 3, 1, 2), (0, 3, 2, 1)]
            tf_shape_transpose_perms = list(itertools.permutations(range(len(tf_tensor.shape))))
            tf_target_transpose_perms = [
                tf_shape_transpose_perm
                for tf_shape_transpose_perm in tf_shape_transpose_perms
                if tf_tensor.transpose(tf_shape_transpose_perm).shape == onnx_tensor_shape
            ]
            # Validation
            # tf_check_infos: [[tf_target_transpose_perm, matched_flg], ...]
            validate_result = False
            tf_check_infos = [
                [tf_target_transpose_perm, 0] for tf_target_transpose_perm in tf_target_transpose_perms
            ]
            for tf_check_info in tf_check_infos:
                if len(onnx_tensor_shape) > 1:
                    tf_transposed_tensor = tf_tensor.transpose(tf_check_info[0])
                    if np.allclose(a=onnx_tensor, b=tf_transposed_tensor, rtol=rtol, atol=atol, equal_nan=True):
                        # Matched
                        tf_check_info[1] = 1
                        max_abs_err = 0.0
                        break
                    else:
                        # Unmatched; track the smallest max-abs error over all perms
                        if onnx_tensor.shape == tf_transposed_tensor.shape:
                            error_value = np.max(np.abs(onnx_tensor - tf_transposed_tensor))
                            max_abs_err = error_value if error_value < max_abs_err else max_abs_err
                else:
                    tf_check_info[1] = 2
                    max_abs_err = 0.0
            # Validation results check
            for tf_check_info in tf_check_infos:
                if tf_check_info[1]:
                    validate_result = tf_check_info[1]
                    break
            if not validate_result and max_abs_err == ONNX_INF_INDEX_VALUE:
                # Tensors deleted from the TensorFlow model structure during
                # the model optimization process are not comparable, so the
                # status is rewritten to Skipped. The same applies when no
                # ONNX/TensorFlow output shapes matched at all.
                check_results[onnx_output_name][1] = 2
                check_results[onnx_output_name][2] = max_abs_err
            else:
                check_results[onnx_output_name][1] = validate_result
                check_results[onnx_output_name][2] = max_abs_err
        return check_results
  • Search for the channel arrangement with the lowest accuracy error.
  • The Transpose OP is excluded from the search because the value to be compared does not change.
  • Not implemented for Constant and ConstantOfShape.
  • Other than the above, OPs that do not involve conversion of values are excluded from processing.
  • For error correction, the same method as for pre_process_transpose_perm is used to check all combinations that pre-transpose the input tensor to the corresponding OP.
  • If the target OP is specified in parameter_replacement.json, omit processing.
  • Remove any intrinsic processing already implemented in MatMul.
  • OPs with both attribute and tensor values, such as Softmax, attempt correction in the following order: attribute value change first, then tensor transposition.
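For reference, a minimal usage sketch of the helper above (hypothetical output names and dummy data, not the repo's driver code):

import numpy as np

# Dummy ONNX output (NCHW) and its NHWC TensorFlow counterpart.
onnx_out = np.random.random_sample([1, 3, 224, 224]).astype(np.float32)

check_results = onnx_tf_tensor_validation(
    onnx_tensor_infos={'output0': onnx_out},
    tf_tensor_infos={'output0': onnx_out.transpose([0, 2, 3, 1])},
)
for name, (_tensor, matched_flg, max_abs_err) in check_results.items():
    # Expected: output0 1 0.0 (the perm search finds (0, 3, 1, 2))
    print(name, matched_flg, max_abs_err)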

[YoloX] output differs between onnx and tflite

Issue Type

Others

onnx2tf version number

1.4.2

onnx version number

1.12.0

tensorflow version number

2.10

Download URL for ONNX

https://drive.google.com/file/d/1QQLf83JwLI_EoQFU5xOGKUvtFPsFSWCN/view

Parameter Replacement JSON

N/A

Description

  1. Purpose: Personal development
  2. What: The outputs differ between onnx and the converted tflite. The image below shows the tflite and onnx outputs consecutively.

  3. How: It looks like lines 849-855 of common_functions.py caused this error. In this case, an intermediate Sub layer returned the wrong output because it didn't re-swap the parameters. But I didn't fully understand why these lines are needed. What is the bug you mentioned in PR #99?

# Skip subsequent processing in the following patterns.
#   const_or_var_1: [1,1,5000]
#   const_or_var_2: [5000]
if len(const_or_var_1.shape) >= 1 \
    and len(const_or_var_2.shape) == 1 \
    and const_or_var_1.shape[-1] == const_or_var_2.shape[-1]:
    return const_or_var_1, const_or_var_2
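As I read it, this early return targets the case where broadcasting already lines up on the trailing axis, so no transposition is needed; a small sketch of that pattern (my own, with the shapes from the comment):

import numpy as np

a = np.zeros((1, 1, 5000))  # const_or_var_1
b = np.zeros((5000,))       # const_or_var_2
# Trailing dimensions match, so broadcasting works without any transpose.
print((a - b).shape)        # (1, 1, 5000)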

[MODNet] Dimensions must be equal, but are 15 and 1280 for '{{node tf.math.multiply_17/Mul}} = Mul[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,15,20,1280], [1,1280,15,20]

Issue Type

Others

onnx2tf version number

1.1.46

Download URL for ONNX

https://github.com/DeepranjanG/Image_Background_Removal
https://github.com/DeepranjanG/Image_Background_Removal/raw/main/pretrained/modnet.onnx

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Expand_187",
      "param_target": "outputs",
      "param_name": "685",
      "post_process_transpose_perm": [0,2,3,1]
    }
  ]
}

Description

1. Personal development
2. Error occurred:

onnx2tf -i modnet.onnx -ois "input:1,3,480,640" -prf replace.json
    Dimensions must be equal, but are 15 and 1280 for '{{node tf.math.multiply_17/Mul}} = Mul[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,15,20,1280], [1,1280,15,20]

3. replace.json
5. [Experimental] Automatic transposition of input values (valid only if immediately preceding is "Conv") #36

When Onnx Matmul inputs have different dimension

Issue Type

Others

onnx2tf version number

1.5.18

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

test_model.zip

Parameter Replacement JSON

Not used

Description

  1. Purpose: Currently getting help from this library for a research project on an anomaly detection task. Thank you for building this library.
  2. What: The two inputs A and B of the ONNX MatMul op do not have the same rank. The error log is attached: error_log.txt
  3. How: I tried fixing it using param_replacement.json, but for MatMul ops only axis transposing is supported, so I am not able to add a dimension to the input.
    [screenshot]

INFO:  input_name.1: input.35 shape: [1, 256, 66] dtype: float32
INFO:  input_name.2: onnx::MatMul_73 shape: [66, 2] dtype: <class 'numpy.float32'>

We are thinking that if we can add one more dimension to input_name.2: onnx::MatMul_73, making it [1, 66, 2], the issue can be resolved (a small sketch follows below).

  4. Why: This ONNX model is converted from a torch module and uses an autoencoder structure. It is important for us to keep the current network structure, so we are hoping to fix this dimension mismatch.
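To make the proposal concrete, a sketch with the shapes from the log (the expand_dims step is the suggested fix, not an existing onnx2tf parameter):

import numpy as np

a = np.zeros((1, 256, 66), dtype=np.float32)  # input.35
b = np.zeros((66, 2), dtype=np.float32)       # onnx::MatMul_73
b3 = np.expand_dims(b, axis=0)                # (1, 66, 2)
print(np.matmul(a, b3).shape)                 # (1, 256, 2)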

[japan_PP-OCRv3] For min_value and max_value in Clip.py

Issue Type

Others

onnx2tf version number

1.1.17

Download URL for ONNX

japan_PP-OCRv3_rec_infer.zip

Parameter Replacement JSON

None

Description

onnx2tf -i japan_PP-OCRv3_rec_infer.onnx

ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/systemkjapan/.local/lib/python3.8/site-packages/onnx2tf/utils/common_functions.py", line 261, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/home/systemkjapan/.local/lib/python3.8/site-packages/onnx2tf/utils/common_functions.py", line 323, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/home/systemkjapan/.local/lib/python3.8/site-packages/onnx2tf/ops/Clip.py", line 109, in make_node
    if (min_value is not None and min_value.shape is not None) \
AttributeError: 'float' object has no attribute 'shape'

I don't think the code here will work if min_value is a float.
Or is it wrong for it to be a float?
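For what it's worth, a minimal sketch of a type-tolerant guard (my own illustration, not the library's actual fix): min_value may arrive as a Python float (from an attribute) or as an np.ndarray (from an initializer), so check for a shape attribute before using it.

import numpy as np

def has_tensor_shape(value) -> bool:
    # Floats have no .shape; np.ndarray always does.
    return value is not None and hasattr(value, 'shape')

for min_value in (0.0, np.array([0.0], dtype=np.float32)):
    if has_tensor_shape(min_value):
        print('tensor clip bound, shape:', min_value.shape)
    else:
        print('scalar clip bound:', min_value)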

[MobileBERT] Add `--optimization_for_gpu_delegate` option

Issue Type

Others

onnx2tf version number

1.5.24

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

  1. [MobileBERT] RecursionError: maximum recursion depth exceeded #130
  2. https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/fastestdet.onnx

Parameter Replacement JSON

  1. https://github.com/PINTO0309/onnx2tf/raw/main/json_samples/replace_MobileBERT.json

Description

Question about channel_transpose in common_functions.py

Issue Type

Others

onnx2tf version number

1.1.25

Download URL for ONNX

gist for reproduction : https://colab.research.google.com/gist/Hyunseok-Kim0/d0aaf6e9ac6fbe461c5f2364db4bc0b2/onnx2tf_20221117.ipynb

Parameter Replacement JSON

N/A

Description

  1. Purpose: Personal development
  2. What: channel_transpose in common_functions.py is used in arithmetic operations like Add, Sub, Mul, etc. What is the main purpose of this function? When the second input has more dimensions, channel_transpose adds an additional squeeze layer and changes the output shape, but not vice versa.
    Please see the gist (https://colab.research.google.com/gist/Hyunseok-Kim0/d0aaf6e9ac6fbe461c5f2364db4bc0b2/onnx2tf_20221117.ipynb). When the output of the network is x = x + y, onnx and tflite have the same output shape. The converted tflite has the wrong shape for x = y + x.
[screenshots: ONNX output, correct tflite (x = x + y), wrong tflite (x = y + x)]

Avoid creating FlexTranspose for DepthToSpace

Issue Type

Feature Request

onnx2tf version number

1.3.14

onnx version number

1.12.0

tensorflow version number

2.10.0

Download URL for ONNX

https://drive.google.com/file/d/1XeM7XpjOsL37qAdpWa1j6x9KMzYbiHgW/view?usp=sharing

Parameter Replacement JSON

{}

Description

  1. Personal development
  2. The PixelShuffle layer in PyTorch (CRD mode in ONNX) produces a FlexTranspose op when converting to TF, as sketched below.
    [screenshot]
  3. I tried implementing a workaround as in #62, but that relies on squeezing dimensions of size 1, which are not necessarily present for PixelShuffle layers.
  4. The presence of FlexTranspose operations requires the TFLite Flex delegate, which is not always possible when deploying.
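For context, a sketch of why CRD mode ends up as a raw transpose (my own illustration, not the repo's implementation): tf.nn.depth_to_space only implements DCR mode, so CRD needs a reshape / 6-D transpose / reshape, and that 6-D transpose is (as far as I can tell) what falls outside the builtin TFLite Transpose kernel and becomes FlexTranspose.

import tensorflow as tf

def depth_to_space_crd(x: tf.Tensor, block_size: int) -> tf.Tensor:
    """CRD-mode DepthToSpace for NHWC input via reshape + 6-D transpose."""
    n, h, w, c = x.shape
    r = block_size
    x = tf.reshape(x, [n, h, w, c // (r * r), r, r])
    x = tf.transpose(x, [0, 1, 4, 2, 5, 3])  # the 6-D transpose in question
    return tf.reshape(x, [n, h * r, w * r, c // (r * r)])

y = depth_to_space_crd(tf.zeros([1, 8, 8, 12]), 2)
print(y.shape)  # (1, 16, 16, 3)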

Weird bug in RoiAlign

Issue Type

Others

onnx2tf version number

1.1.33

Download URL for ONNX

Please use code below to reproduce bug.

import torch
from einops import rearrange
from torch import nn
import numpy as np
from onnx2tf import convert
import shutil
import torchvision
import tensorflow as tf
import cv2
import matplotlib.pyplot as plt
import requests
import onnxruntime


class dummy_network(nn.Module):

    def __init__(self, output_size, spatial_scale):
        super(dummy_network, self).__init__()

        self.output_size = output_size
        self.spatial_scale = spatial_scale

    def forward(self, x, roi):
        x = torchvision.ops.roi_align(x, boxes=roi, output_size=self.output_size, spatial_scale=self.spatial_scale)

        return x


def visualize(outputs):
    plt.figure(figsize=(8, 8))

    col = 4
    row = len(outputs) // 4 + 1

    for i, o in enumerate(outputs, start=1):
        plt.subplot(row, col, i)
        plt.axis("off")
        plt.imshow(o.astype(int))

    plt.show()


def main():
    model = dummy_network(output_size=(64, 64), spatial_scale=1.0)
    model.eval()

    dummy_name = "dummy_roi_align"
    onnx_save_path = f"tflite/{dummy_name}.onnx"
    temp_tflite = "tflite/model_float32.tflite"
    tflite_save_path = f"tflite/{dummy_name}.tflite"

    url = "https://static01.nyt.com/images/2021/09/14/science/07CAT-STRIPES/07CAT-STRIPES-mediumSquareAt3X-v2.jpg"
    image_nparray = np.asarray(bytearray(requests.get(url).content), dtype=np.uint8)
    image = cv2.imdecode(image_nparray, cv2.IMREAD_COLOR)

    dummy_input_x = np.expand_dims(cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), (256, 256)), axis=0)
    dummy_input_x = rearrange(torch.Tensor(dummy_input_x), "n h w c -> n c h w")
    # dummy_input_roi = [[0, 0, 0, 0.5, 0.5], [0, 0.5, 0, 1, 0.5], [0, 0, 0.5, 0.5, 1], [0, 0.5, 0.5, 1, 1]]
    dummy_input_roi = [[0, 0, 0, 128, 128], [0, 128, 0, 256, 128], [0, 0, 128, 128, 256], [0, 128, 128, 256, 256]]
    # dummy_input_roi = [[0, 0, 1, 2, 3]]
    # dummy_input_roi = [[0, 0, 0, 256, 256]]
    dummy_input_roi = torch.Tensor(dummy_input_roi)

    torch.onnx.export(model,
                      args=(dummy_input_x, dummy_input_roi),
                      f=onnx_save_path,
                      input_names=["x", "roi"],
                      opset_version=11)

    convert(onnx_save_path, output_folder_path="tflite")
    shutil.move(temp_tflite, tflite_save_path)


    # get torch output
    # -----------------------------------------------------------------------------------------------
    with torch.no_grad():
        torch_output = model(dummy_input_x, dummy_input_roi)
        torch_output = rearrange(torch_output, "n c h w -> n h w c")

    # visualize(torch_output)

    # get onnx output
    # -----------------------------------------------------------------------------------------------
    onnx_session = onnxruntime.InferenceSession(onnx_save_path, providers=['CPUExecutionProvider'])
    onnx_inputs = dict(x=dummy_input_x.numpy(), roi=dummy_input_roi.numpy())
    onnx_output = onnx_session.run(None, onnx_inputs)[0]
    onnx_output = rearrange(onnx_output, "n c h w -> n h w c")

    # compare torch output and onnx output
    np.testing.assert_allclose(onnx_output, torch_output.numpy())

    # get tflite output
    # -----------------------------------------------------------------------------------------------
    tflite_onnx2tf = tf.lite.Interpreter(model_path=tflite_save_path)
    tflite_onnx2tf.allocate_tensors()
    tflite_onnx2tf.set_tensor(tflite_onnx2tf.get_input_details()[0]['index'],
                              rearrange(dummy_input_x.numpy(), "n c h w -> n h w c"))
    tflite_onnx2tf.set_tensor(tflite_onnx2tf.get_input_details()[1]['index'], dummy_input_roi.numpy())
    tflite_onnx2tf.invoke()
    tflite_output = [tflite_onnx2tf.get_tensor(i['index']) for i in tflite_onnx2tf.get_output_details()][0]

    total_output = np.concatenate([torch_output, onnx_output, tflite_output], axis=0)
    visualize(total_output)

    print("convert done")

    return


if __name__ == "__main__":
    main()

Parameter Replacement JSON

N/A

Description

  1. Purpose: Personal development

  2. What: When I tested RoiAlign, it showed unexpected output as below.

[screenshots: pytorch output, onnx output, wrong tflite output, correct tflite output after modification]
  3. How: After some debugging, I found that the wrong order of roi coordinates is fed into tflite. After changing line 124 to x0, x1, y1, y0 = tf.split(boxes, 4, axis=1), I could get the correct result as shown above.
    def transform_fpcoor_for_tf(
        *,
        boxes,
        image_shape,
        crop_size,
        sampling_ratio,
        adaptive_ratio,
    ):
        x0, y0, x1, y1 = tf.split(boxes, 4, axis=1)
        if not adaptive_ratio:
            crop_shape = (

    Although the output value is slightly different (as below) due to implementation details in tensorflow, it looks like it works fine. However, I couldn't find out why the order of the roi coordinates is changed from x0, y0, x1, y1 to x0, x1, y1, y0.

[human_segmentation_pphumanseg_2021oct] Failed to run test sample. The inequality of unknown TensorShapes is undefined

Issue Type

Others

onnx2tf version number

1.2.13

Download URL for ONNX

wget https://github.com/PINTO0309/onnx2tf/releases/download/1.1.27/human_segmentation_pphumanseg_2021oct.onnx

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "StatefulPartitionedCall/Tile_4",
      "param_target": "inputs", # attributes or inputs
      "param_name": "const_fold_opt__677",
      "values": [1,1,17] # Disable parameter transposition or overwrite parameters
    },
    {
      "op_name": "StatefulPartitionedCall/Cast_3",
      "param_target": "attributes", # attributes or inputs
      "param_name": "to",
      "values": 1 # Disable parameter transposition or overwrite "to" parameters
    },
    {
      "op_name": "Resize__697",
      "param_target": "inputs",
      "param_name": "Concat__696:0",
      "values": [26,26] # Replacement of unk__x (Resize OP, sizes height/width parameter)
    },
    {
      "op_name": "Transpose__927",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3] # Disable parameter transposition or overwrite "perm" parameters
    },
    {
      "op_name": "StatefulPartitionedCall/functional_1/max_unpooling2d_2/Reshape_1",
      "param_target": "inputs",
      "param_name": "const_fold_opt__911",
      "values": [4,131072] # Overwrite "shape" parameters
    },
    {
      "op_name": "Reshape_25",
      "param_target": "outputs",
      "param_name": "onnx::InstanceNormalization_270",
      "post_process_transpose_perm": [0,2,1] # Extrapolate 3D Transpose after Reshape
    },
    {
      "op_name": "Reshape_30",
      "param_target": "outputs",
      "param_name": "onnx::Mul_275",
      "post_process_transpose_perm": [0,2,3,1] # Extrapolate 4D Transpose after Reshape
    },
    {
      "op_name": "flatten_1127",
      "param_target": "inputs",
      "param_name": "dropout0",
      "pre_process_transpose_perm": [0,3,1,2]
    }
  ]
}

Description

Failed to run the test sample in the CLI. Using:

wget https://github.com/PINTO0309/onnx2tf/releases/download/1.1.27/human_segmentation_pphumanseg_2021oct.onnx
wget https://github.com/PINTO0309/onnx2tf/releases/download/1.1.27/replace.json
onnx2tf -i human_segmentation_pphumanseg_2021oct.onnx -prf replace.json

Encountered:

Model optimizing started ============================================================
WARNING: Failed to optimize the onnx file.
Traceback (most recent call last):
  File "C:\Users\zhugc\AppData\Roaming\Python\Python39\site-packages\onnx2tf\onnx2tf.py", line 427, in convert
    result = subprocess.check_output(
  File "D:\Anaconda3\lib\subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "D:\Anaconda3\lib\subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['onnxsim', 'human_segmentation_pphumanseg_2021oct.onnx', 'human_segmentation_pphumanseg_2021oct.onnx']' returned non-zero exit status 1.

Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!

Model loaded ========================================================================

Model convertion started ============================================================
INFO: input_op_name: x shape: [1, 3, 192, 192] dtype: float32
ERROR: The trace log is below.
Traceback (most recent call last):
  File "C:\Users\zhugc\AppData\Roaming\Python\Python39\site-packages\onnx2tf\utils\common_functions.py", line 262, in print_wrapper_func
    result = func(*args, **kwargs)
  File "C:\Users\zhugc\AppData\Roaming\Python\Python39\site-packages\onnx2tf\ops\Input.py", line 81, in make_node
    if graph_input.shape != tf.TensorShape(None) and len(graph_input.shape) in [3, 4, 5] \
  File "C:\Users\zhugc\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\tensor_shape.py", line 1294, in __ne__
    raise ValueError("The inequality of unknown TensorShapes is undefined.")
ValueError: The inequality of unknown TensorShapes is undefined.
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC, use the -kt option.

[E2Pose] Convert error

Issue Type

Others

onnx2tf version number

1.2.2

Download URL for ONNX

https://github.com/AISIN-TRC/E2Pose/blob/main/pretrains/download.sh
https://github.com/PINTO0309/PINTO_model_zoo/tree/main/333_E2Pose

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_01_norm/FusedBatchNormV3__930",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_02_norm/FusedBatchNormV3__1055",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_03_norm/FusedBatchNormV3__1180",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_04_norm/FusedBatchNormV3__1305",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_02_p_seed/Conv2D__1065",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_03_p_seed/Conv2D__1190",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_04_p_seed/Conv2D__1315",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_decode_05_p_seed/Conv2D__1408",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_04_cyx_seed_split/split",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_03_cyx_seed_split/split",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_02_cyx_seed_split/split",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_01_cyx_seed_split/split",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_05_cyx_seed_split/split",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_04_cyx/concat",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_03_cyx/concat",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_02_cyx/concat",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_01_cyx/concat",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_05_cyx/concat",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    },
    {
      "op_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_04_cyx_reshape/Reshape",
      "param_target": "inputs",
      "param_name": "e2pose/E2Pose_Inference/E2Pose_pose_stage00_output_04_cyx/concat:0",
      "pre_process_transpose_perm": [0,2,3,1]
    }
  ]
}

Description

  1. Personal development
  2. A conversion error occurred
  3. replacement.json
  4. Because I want tflite.
  5. https://github.com/PINTO0309/onnx2tf/releases/tag/1.2.3

[vae_encoder] Error when converting vae_encoder.onnx

Issue Type

Others

onnx2tf version number

1.1.15

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.15/vae_encoder.onnx

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Reshape_448",
      "param_target": "outputs",
      "param_name": "onnx::Add_817",
      "post_process_transpose_perm": [0,2,3,1]
    }
  ]
}

Description

  1. Error when converting vae_encoder.onnx, for StableDiffusion (Hugging Face).
  • Error message
    onnx2tf -i vae_encoder.onnx
    ERROR: The trace log is below.
    Traceback (most recent call last):
      File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 261, in print_wrapper_func
        result = func(*args, **kwargs)
      File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 323, in inverted_operation_enable_disable_wrapper_func
        result = func(*args, **kwargs)
      File "/home/xxxxx/git/onnx2tf/onnx2tf/ops/Add.py", line 79, in make_node
        tf.math.add(
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "/usr/local/lib/python3.8/dist-packages/keras/layers/core/tf_op_layer.py", line 119, in handle
        return TFOpLambda(op)(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
    ValueError: Exception encountered when calling layer "tf.math.add_56" (type TFOpLambda).
    
    Dimensions must be equal, but are 512 and 64 for '{{node tf.math.add_56/Add}} = AddV2[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,512,64,64], [1,64,64,512].
    
    Call arguments received by layer "tf.math.add_56" (type TFOpLambda):
      • x=tf.Tensor(shape=(1, 512, 64, 64), dtype=float32)
      • y=tf.Tensor(shape=(1, 64, 64, 512), dtype=float32)
      • name='Add_449'
    ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
    ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
    ERROR: If the input OP of ONNX before conversion is NHWC, use the -kt option.
    
  2. To avoid the conversion error, write replace.json and run:
    onnx2tf -i vae_encoder.onnx -prf replace.json

The final conversion was successful. https://github.com/PINTO0309/onnx2tf/releases/tag/1.1.15

Bug in TFlite NonMaxSuppression

Issue Type

Others

onnx2tf version number

1.2.9

Download URL for ONNX

Any onnx file with NonMaxSuppression can be used. Please use the code below for simple debugging.

import os
import shutil

import onnxruntime
import tensorflow as tf
import torch
import torchvision
from torch import nn

from onnx2tf import convert


class dummy_network(nn.Module):

    def __init__(self):
        super(dummy_network, self).__init__()

    def forward(self, boxes, scores):
        # unique with sort
        result = torchvision.ops.nms(boxes, scores, iou_threshold=0.3)

        return result


def main():
    os.makedirs("tflite", exist_ok=True)
    model = dummy_network()
    model.eval()

    dummy_name = "dummy_nms"
    onnx_save_path = f"tflite/{dummy_name}.onnx"
    temp_tflite = "tflite/model_float32.tflite"
    tflite_save_path = f"tflite/{dummy_name}.tflite"

    dummy_input_boxes = torch.abs(torch.randn(32, 4, dtype=torch.float32))
    dummy_input_boxes = torch.concat([dummy_input_boxes, dummy_input_boxes], dim=0)
    dummy_scores = torch.abs(torch.randn(32, dtype=torch.float32))
    dummy_scores = torch.concat([dummy_scores, dummy_scores], dim=0)

    torch.onnx.export(model,
                      args=(dummy_input_boxes, dummy_scores),
                      f=onnx_save_path,
                      input_names=["boxes", "scores"],
                      output_names=["nms_result"],
                      opset_version=11)

    # get onnx output
    # -----------------------------------------------------------------------------------------------
    onnx_session = onnxruntime.InferenceSession(onnx_save_path, providers=['CPUExecutionProvider'])
    onnx_inputs = dict(boxes=dummy_input_boxes.numpy(), scores=dummy_scores.numpy())
    onnx_outputs = onnx_session.run(None, onnx_inputs)

    convert(onnx_save_path, output_folder_path="tflite")
    shutil.move(temp_tflite, tflite_save_path)

    # # compare torch output and onnx output
    # np.testing.assert_allclose(onnx_output, torch_output.numpy())
    #
    # get tflite output
    # -----------------------------------------------------------------------------------------------
    tflite_onnx2tf = tf.lite.Interpreter(model_path=tflite_save_path)
    tflite_onnx2tf.allocate_tensors()
    tflite_onnx2tf.set_tensor(tflite_onnx2tf.get_input_details()[0]['index'], dummy_input_boxes.numpy())
    tflite_onnx2tf.set_tensor(tflite_onnx2tf.get_input_details()[1]['index'], dummy_scores.numpy())
    tflite_onnx2tf.invoke()
    tflite_output = [tflite_onnx2tf.get_tensor(i['index']) for i in tflite_onnx2tf.get_output_details()][0]

    print("convert done")

    return


if __name__ == "__main__":
    main()

Parameter Replacement JSON

N/A

Description

  1. Purpose: Personal development

  2. What: The onnx NonMaxSuppression operator is converted using tf.image.non_max_suppression. However, it looks like there is a bug in the tensorflow operation:
    tensorflow/tensorflow#51629
    For now, tf.image.non_max_suppression always pads the number of outputs to max_output_boxes_per_class. The result below shows the difference between the onnx and tflite outputs.
    [screenshot]
    There is another operation, tf.image.non_max_suppression_padded, which takes an extra parameter pad_to_max_output_size. I thought this problem could be solved by setting pad_to_max_output_size to False, but it also didn't work.
    [screenshot]
    The returned indices are not correct; it looks like the implementation is totally different between tf.image.non_max_suppression_padded and tf.image.non_max_suppression.

  3. How: This problem can be solved by adding a single line of code, selected_indices = tf.unique(selected_indices).y, after line 208. However, the last element of the indices can sometimes be a wrong value, since this solution just removes duplicated indices.

    selected_indices = tf.image.non_max_suppression(
        tf_boxes,
        tf_scores,
        max_output_boxes_per_class,
        iou_threshold,
        score_threshold,
    )
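The workaround from point 3, shown standalone (hypothetical index values; the padded output mimics what the report describes):

import tensorflow as tf

padded_indices = tf.constant([4, 9, 1, 1, 1, 1])  # NMS output padded with repeats
deduped = tf.unique(padded_indices).y
print(deduped.numpy())  # [4 9 1]; caveat from point 3: one copy of the pad value survives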

[Yolov7] Yolov7 problem with detection coordinates

Issue Type

Others

onnx2tf version number

1.1.38

Download URL for ONNX

https://github.com/triptec/detection-models/blob/master/yolov7/yolov7-tiny.onnx

Parameter Replacement JSON

{}

Description

  1. Personal project
  2. I'm trying to convert a yolov7-tiny.onnx to a tflite int8 model.
    First I export to onnx with:
    python export.py --weights ../models/yolov7/yolov7-tiny.pt --grid --simplify
    Then convert it with:
    onnx2tf -i yolov7-tiny.onnx -ioqd uint8 -oiqt
Finally I run (using https://github.com/ultralytics/yolov5/blob/master/detect.py)
    python detect.py --weights ../models/yolov7/yolov7-tiny.tflite --conf 0.25 --img-size 640 --source data/images/zidane.jpg

And it results in this:

tensor([2.39083e+05, 1.02942e+05, 3.69536e+05, 3.17344e+05, 7.92245e-01, 0.00000e+00])
tensor([1.40084e+05, 2.27828e+05, 1.72367e+05, 3.18328e+05, 5.62911e-01, 2.70000e+01])
tensor([4.15838e+04, 1.53934e+05, 3.06103e+05, 3.16576e+05, 5.42062e-01, 0.00000e+00])
[tensor(1280.), tensor(720.), tensor(1280.), tensor(720.)] tensor(0.54206) tensor(0.)
[tensor(1280.), tensor(720.), tensor(1280.), tensor(720.)] tensor(0.56291) tensor(27.)
[tensor(1280.), tensor(720.), tensor(1280.), tensor(720.)] tensor(0.79225) tensor(0.)
image 1/1 /home/andreas/Development/priv/cat-detection/yolov5/data/images/zidane.jpg: 640x640 2 persons, 1 tie, 1561.9ms
Speed: 2.7ms pre-process, 1561.9ms inference, 1.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp38

As you can see, it finds 2 people and 1 tie, but the coordinates are not correct.

Here's the correct result with the onnx model:
[screenshot: zidane.jpg detections]

In the readme it's stated that YOLOv7-tiny with Post-Process (NMS) has been converted from ONNX to TFLite Float32. Do you have the command and a "detection.py" that works with the result? Perhaps if I could get hold of that, I could solve this myself?

  3. I've tried different flags when converting, but I really don't know if this is due to the model not being compatible with int8.
  4. I'm trying to convert a yolov7 model to tflite to be able to run inference on something called frigate. I've already written the infra to run yolov5 tflite models and would like to use the same code for yolov7, possibly with some changes, but I need to figure out what to change.
  5. none

Problem with the docker image

Issue Type

Others

onnx2tf version number

1.1.38

Download URL for ONNX

none

Parameter Replacement JSON

none

Description

  1. Personal dev
  2. While trying to run the docker image with docker run --rm -it -v `pwd`:/workdir -w /workdir ghcr.io/pinto0309/onnx2tf:1.1.38 I get the following error:
Unable to find image 'ghcr.io/pinto0309/onnx2tf:1.1.38' locally
docker: Error response from daemon: Head "https://ghcr.io/v2/pinto0309/onnx2tf/manifests/1.1.38": unauthorized.
  3. I'm considering building it myself.
  4. I just would like the documentation to be correct.
  5. None

Also, I wonder what would be the best way to then go from tensorflow to tflite?

Mul operation fails when it follows an Unsqueeze operation, citing a dimension mismatch; the input from the Unsqueeze operation is not transposed when passed as input to Mul

Issue Type

Others

Download URL for ONNX

https://act-my.sharepoint.com/personal/WopiFrame.aspx?sourcedoc=%7B35bd9e06-60e0-471a-b50b-7743073d62b8%7D&action=default

Description

Log added below: one of the inputs to the Mul operation is transposed, while the one from Unsqueeze is not.

INFO: tf_op_type: Unsqueeze input_name.1: tf.math.divide_1/truediv:0 shape: (1, 72) dtype: <dtype: 'float32'>
INFO: tf_op_type: Unsqueeze output_name.1: 466 shape: (1, 72, 1) dtype: <dtype: 'float32'>
starting to make node
<module 'onnx2tf.ops.Unsqueeze' from '~/onnx2tf/onnx2tf/ops/Unsqueeze.py'>

INFO: onnx_op_type: Unsqueeze onnx_op_name: Unsqueeze_49
INFO: input_name.1: 466 shape: [1, 72, 1] dtype: float32
INFO: output_name.1: 467 shape: [1, 72, 1, 1] dtype: float32
[3]
[1, 72, 1, 1]
[1, 72, 1, 1]
tf.reshape_1/Reshape:0
(1, 72, 1, 1)
True
INFO: tf_op_type: Unsqueeze input_name.1: tf.reshape/Reshape:0 shape: (1, 72, 1) dtype: <dtype: 'float32'>
INFO: tf_op_type: Unsqueeze output_name.1: 467 shape: (1, 72, 1, 1) dtype: <dtype: 'float32'>
starting to make node
<module 'onnx2tf.ops.Mul' from '~/onnx2tf/onnx2tf/ops/Mul.py'>

INFO: onnx_op_type: Mul onnx_op_name: Mul_50
INFO: input_name.1: 453 shape: [1, 72, 60, 60] dtype: float32
INFO: input_name.2: 467 shape: [1, 72, 1, 1] dtype: float32
INFO: output_name.1: 468 shape: [1, 72, 60, 60] dtype: float32
[1, 72, 60, 60]

Traceback (most recent call last):
  File "/onnx2tf/onnx2tf/utils/common_functions.py", line 49, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/onnx2tf/onnx2tf/utils/common_functions.py", line 105, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/onnx2tf/onnx2tf/ops/Mul.py", line 61, in make_node
    tf.math.multiply(
  File "/model-conversion/git/env/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/model-conversion/git/env/lib/python3.8/site-packages/keras/layers/core/tf_op_layer.py", line 119, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/model-conversion/git/env/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.math.multiply_3" (type TFOpLambda).

Dimensions must be equal, but are 60 and 72 for '{{node tf.math.multiply_3/Mul}} = Mul[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,60,60,72], [1,72,1,1].

Call arguments received by layer "tf.math.multiply_3" (type TFOpLambda):
• x=tf.Tensor(shape=(1, 60, 60, 72), dtype=float32)
• y=tf.Tensor(shape=(1, 72, 1, 1), dtype=float32)
• name='Mul_50'
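For illustration, a sketch of the mismatch in the traceback (my own example): the per-channel factor coming out of Unsqueeze keeps its NCHW-style shape while the activation is already NHWC; moving its channel axis last lets broadcasting work.

import numpy as np

x = np.zeros((1, 60, 60, 72), dtype=np.float32)  # activation, already NHWC
y = np.zeros((1, 72, 1, 1), dtype=np.float32)    # Unsqueeze output, still NCHW-shaped
# x * y  # ValueError: operands could not be broadcast together (60 vs 72)
y_nhwc = np.transpose(y, (0, 2, 3, 1))           # (1, 1, 1, 72)
print((x * y_nhwc).shape)                        # (1, 60, 60, 72)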

[MobileBERT] RecursionError: maximum recursion depth exceeded

Issue Type

Others

onnx2tf version number

1.5.16

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

  • tflite
    https://tfhub.dev/tensorflow/lite-model/mobilebert/1/metadata/1?lite-format=tflite

  • tflite to onnx

    python -m tf2onnx.convert \
    --opset 11 \
    --tflite lite-model_mobilebert_1_metadata_1.tflite \
    --output lite_model_mobilebert_1_metadata_1.onnx
    
    onnxsim lite_model_mobilebert_1_metadata_1.onnx lite_model_mobilebert_1_metadata_1.onnx
    onnxsim lite_model_mobilebert_1_metadata_1.onnx lite_model_mobilebert_1_metadata_1.onnx
    onnxsim lite_model_mobilebert_1_metadata_1.onnx lite_model_mobilebert_1_metadata_1.onnx
    onnxsim lite_model_mobilebert_1_metadata_1.onnx lite_model_mobilebert_1_metadata_1.onnx
    onnxsim lite_model_mobilebert_1_metadata_1.onnx lite_model_mobilebert_1_metadata_1.onnx
  • onnx to tflite

    onnx2tf -i lite_model_mobilebert_1_metadata_1.onnx -cotof -cotoa 1e-1 -cgdc
    

Parameter Replacement JSON

N/A

Description

The model structure was too complex and overflowed Python's maximum recursion depth.
The following must be added to the code to avoid this:

sys.setrecursionlimit(10000)
  1. Personal
  2. Error
    Traceback (most recent call last):
      File "/home/b920405/.local/bin/onnx2tf", line 33, in <module>
        sys.exit(load_entry_point('onnx2tf', 'console_scripts', 'onnx2tf')())
      File "/usr/local/lib/python3.8/dist-packages/onnx2tf/onnx2tf.py", line 1698, in main
        model = convert(
      File "/usr/local/lib/python3.8/dist-packages/onnx2tf/onnx2tf.py", line 718, in convert
        model = tf.keras.Model(inputs=inputs, outputs=outputs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/trackable/base.py", line 205, in _method_wrapper
        result = method(self, *args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 165, in __init__
        self._init_graph_network(inputs, outputs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/trackable/base.py", line 205, in _method_wrapper
        result = method(self, *args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 264, in _init_graph_network
        nodes, nodes_by_depth, layers, _ = _map_graph_network(
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 1046, in _map_graph_network
        nodes_in_decreasing_depth, layer_indices = _build_map(outputs)
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 1176, in _build_map
        _build_map_helper(
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 1219, in _build_map_helper
        _build_map_helper(
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 1219, in _build_map_helper
        _build_map_helper(
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 1219, in _build_map_helper
        _build_map_helper(
      [Previous line repeated 986 more times]
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 1199, in _build_map_helper
        node = layer._inbound_nodes[node_index]
    RecursionError: maximum recursion depth exceeded
    

Problem in padding in ConvTranspose when stride is larger than 1

Issue Type

Others

onnx2tf version number

1.2.14

onnx version number

1.12.0

tensorflow version number

2.10

Download URL for ONNX

Please use code below for debugging.

import os
import shutil

import onnxruntime
import tensorflow as tf
import torch
from einops import rearrange
from torch import nn
import numpy as np
from onnx2tf import convert


class dummy_network(nn.Module):

    def __init__(self):
        super(dummy_network, self).__init__()

        # segmentation fault with wrong padding mode
        # self.tconv = torch.nn.ConvTranspose2d(3, 64, (3, 3), stride=3, padding=1)     # case 1: wrong output
        # self.tconv = torch.nn.ConvTranspose2d(3, 64, (4, 4), stride=3, padding=1)     # case 1: wrong output
        self.tconv = torch.nn.ConvTranspose2d(3, 64, (3, 3), stride=2, padding=1)       # case 2: segmentation fault

    def forward(self, x):
        result = self.tconv(x)

        return result


def main():
    os.makedirs("tflite", exist_ok=True)
    model = dummy_network()
    model.eval()

    dummy_name = "dummy_tconv"
    onnx_save_path = f"tflite/{dummy_name}.onnx"
    temp_tflite = "tflite/model_float32.tflite"
    tflite_save_path = f"tflite/{dummy_name}.tflite"

    # y1, x1, y2, x2
    dummy_x = torch.randn(1, 3, 224, 224, dtype=torch.float32)

    torch.onnx.export(model,
                      args=(dummy_x,),
                      f=onnx_save_path,
                      input_names=["x"],
                      output_names=["result"],
                      opset_version=11)

    # get onnx output
    # -----------------------------------------------------------------------------------------------
    onnx_session = onnxruntime.InferenceSession(onnx_save_path, providers=['CPUExecutionProvider'])
    onnx_inputs = dict(x=dummy_x.numpy())
    onnx_outputs = onnx_session.run(None, onnx_inputs)

    convert(onnx_save_path, output_folder_path="tflite")
    shutil.move(temp_tflite, tflite_save_path)

    # get tflite output
    # -----------------------------------------------------------------------------------------------
    tflite_onnx2tf = tf.lite.Interpreter(model_path=tflite_save_path)
    tflite_onnx2tf.allocate_tensors()
    tflite_onnx2tf.set_tensor(tflite_onnx2tf.get_input_details()[0]['index'],
                              rearrange(dummy_x.numpy(), "n c h w -> n h w c"))
    tflite_onnx2tf.invoke()
    tflite_output = [tflite_onnx2tf.get_tensor(i['index']) for i in tflite_onnx2tf.get_output_details()][0]

    np.testing.assert_allclose(rearrange(tflite_output, "n h w c -> n c h w"), onnx_outputs[0], atol=1e-3, rtol=0.5)

    print("convert done")

    return


if __name__ == "__main__":
    main()

Parameter Replacement JSON

N/A

Description

  1. Purpose: Personal development

  2. What: The converted tflite contains the wrong padding mode, which causes wrong output or a segmentation fault when stride is larger than 1 and padding is larger than 0.

  3. How:

    pad_mode = 'VALID'
    if auto_pad == 'NOTSET':
        if graph_node_input_shape[2:] == graph_node_output_shape[2:]:
            pad_mode = "SAME"
        else:
            # TODO: check for valid explicit pads.
            pad_mode = 'VALID'

    In ConvTranspose.py line 105, the padding mode is set to SAME by comparing the input shape with the output shape. However, the output shape is not the same as the input shape when the stride is not 1.
    With the code in the section above, the outputs of onnx and tflite differ. Even worse, the converted tflite sometimes fails to invoke due to a segmentation fault, for example when the kernel is (3, 3), stride is (2, 2), and padding is (1, 1).
    By replacing line 105 with [int((s * (i - 1) + k - o) / 2) for s, i, k, o in zip(strides, graph_node_input_shape[2:], kernel_shape, graph_node_output_shape[2:])] == pads[1:3], the error is gone in some cases. But this is not enough; there should be more code comparing the given padding values against the calculated padding values to decide whether the padding mode is SAME (the arithmetic is sketched below).
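To make the suggested check concrete, here is the arithmetic as a standalone sketch (my own helper name; dilation and output_padding are assumed absent): for ConvTranspose, output = stride * (input - 1) + kernel - 2 * pad, so the padding implied by the shapes can be compared against the explicit pads attribute.

def implied_pads(strides, in_hw, kernel_shape, out_hw):
    # pad = (stride * (input - 1) + kernel - output) / 2, per spatial axis
    return [
        (s * (i - 1) + k - o) // 2
        for s, i, k, o in zip(strides, in_hw, kernel_shape, out_hw)
    ]

# Case 2 from the repro above: 224 -> 447 with kernel 3, stride 2, padding 1.
# SAME would require output 448, so SAME is the wrong choice here.
print(implied_pads([2, 2], [224, 224], [3, 3], [447, 447]))  # [1, 1]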

[nanodet-plus] Conv layer shape wrong

Issue Type

Others

onnx2tf version number

1.1.22

Download URL for ONNX

https://drive.google.com/file/d/1HyUDTYSTgS7_Gs29SxUSoxs3irqvIsXM/view?usp=sharing

Parameter Replacement JSON

{}

Description

  1. Purpose: Personal development
  2. What:
    Running script: onnx2tf -i nanodet-plus-m_416.onnx -oh5 -k "data"
    Issue log:
INFO: onnx_op_type: Conv onnx_op_name: /backbone/stage2/stage2.1/branch2/branch2.0/Conv
INFO: input_name.1: /backbone/stage2/stage2.1/Split_output_1 shape: [1, 58, 52, 52] dtype: float32
INFO: input_name.2: onnx::Conv_1392 shape: [58, 58, 1, 1] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_1393 shape: [58] dtype: <class 'numpy.float32'>
INFO: output_name.1: /backbone/stage2/stage2.1/branch2/branch2.0/Conv_output_0 shape: [1, 58, 52, 52] dtype: float32
ERROR: The trace log is below.
Traceback (most recent call last):
  File "/mnt/hdd10tb/Users/thaovu/.conda/envs/onnx2tf/lib/python3.8/site-packages/onnx2tf/utils/common_functions.py", line 261, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/mnt/hdd10tb/Users/thaovu/.conda/envs/onnx2tf/lib/python3.8/site-packages/onnx2tf/utils/common_functions.py", line 323, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/mnt/hdd10tb/Users/thaovu/.conda/envs/onnx2tf/lib/python3.8/site-packages/onnx2tf/ops/Conv.py", line 153, in make_node
    tf.nn.convolution(
  File "/mnt/hdd10tb/Users/thaovu/.conda/envs/onnx2tf/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/mnt/hdd10tb/Users/thaovu/.conda/envs/onnx2tf/lib/python3.8/site-packages/keras/layers/core/tf_op_layer.py", line 119, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/mnt/hdd10tb/Users/thaovu/.conda/envs/onnx2tf/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.nn.convolution_4" (type TFOpLambda).

Depth of input (52) is not a multiple of input depth of filter (58) for '{{node tf.nn.convolution_4/convolution}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, tf.nn.convolution_4/convolution_internal/filters)' with input shapes: [1,58,52,52], [1,1,58,58].
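For reference, a small repro of the shape arithmetic in this error (my own sketch with zero tensors): tf.nn.convolution on 4-D input assumes NHWC, so the untransposed NCHW tensor's last dimension (52) is read as the depth and clashes with the HWIO filter's input depth (58).

import tensorflow as tf

x_nchw = tf.zeros([1, 58, 52, 52])              # ONNX layout, untransposed
w_hwio = tf.zeros([1, 1, 58, 58])               # 1x1 filter, 58 in / 58 out
# tf.nn.convolution(x_nchw, w_hwio)             # fails: depth 52 vs filter depth 58
x_nhwc = tf.transpose(x_nchw, [0, 2, 3, 1])     # (1, 52, 52, 58)
print(tf.nn.convolution(x_nhwc, w_hwio).shape)  # (1, 52, 52, 58)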

Question 1: The shape of the onnx::Conv_1392 weights should be [58,58,1,1], but here the filter is [1,1,58,58]. Is this caused by data_format?
Question 2: I set -k "data" but the data_format of this layer is NHWC. Correct me if I'm wrong.
3. How: First, I ran without the -k option and got the same issue; I tried using -k but it was still the same. I also checked parameter replacement but couldn't find a suitable solution.
4. Why: I want to convert the NanoDet-plus-m model to Keras (h5 format).
5. Resources: PyTorch model from the NanoDet model zoo https://github.com/RangiLyu/nanodet
Thank you for your hard work. Your projects are very useful <3

[YOLOX-X] [Faster-RCNN] [YOLOv7] TensorFlow aborts when exporting a model with NMS to Keras (.h5)

Issue Type

Others

onnx2tf version number

1.5.29

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

https://s3.ap-northeast-2.wasabisys.com/temp-models/onnx2tf_146/yolox_x.onnx
https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/faster_rcnn-10.onnx
https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/yolov7_tiny_head_0.768_post_480x640.onnx

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. TensorFlow aborts when exporting a model with NMS to Keras (.h5)
    onnx2tf -i yolox_x.onnx -oh5
    
    ValueError: This Keras op layer was generated from <function non_max_suppression at 0x7fd6a96ff1f0>,
    a method that is not publicly exposed in the TensorFlow API.
    This may have happened if the method was explicitly decorated to add dispatching support,
    and it was used during Functional model construction.
    To ensure cross-version compatibility of Keras models that use op layers,
    only op layers produced from public TensorFlow API symbols can be serialized.
    
    [screenshot]

[human_segmentation_pphumanseg_2021oct] `-cotof` option is used, ValueError: Output tensors of a Functional model must be the output of a TensorFlow Layer

Issue Type

Others

onnx2tf version number

1.5.7

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.27/human_segmentation_pphumanseg_2021oct.onnx

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Resize_0",
      "param_target": "inputs",
      "param_name": "Concat_0",
      "values": [48,48]
    },
    {
      "op_name": "Resize_1",
      "param_target": "inputs",
      "param_name": "Concat_1",
      "values": [48,48]
    },
    {
      "op_name": "Resize_2",
      "param_target": "inputs",
      "param_name": "Concat_2",
      "values": [48,48]
    },
    {
      "op_name": "Resize_3",
      "param_target": "inputs",
      "param_name": "Concat_3",
      "values": [24,24]
    },
    {
      "op_name": "Resize_4",
      "param_target": "inputs",
      "param_name": "Concat_4",
      "values": [48,48]
    },
    {
      "op_name": "Resize_5",
      "param_target": "inputs",
      "param_name": "Concat_5",
      "values": [48,48]
    },
    {
      "op_name": "Resize_6",
      "param_target": "inputs",
      "param_name": "Concat_6",
      "values": [48,48]
    },
    {
      "op_name": "Resize_7",
      "param_target": "inputs",
      "param_name": "Concat_7",
      "values": [24,24]
    },
    {
      "op_name": "Resize_8",
      "param_target": "inputs",
      "param_name": "Concat_8",
      "values": [24,24]
    },
    {
      "op_name": "Resize_9",
      "param_target": "inputs",
      "param_name": "Concat_9",
      "values": [12,12]
    },
    {
      "op_name": "Resize_10",
      "param_target": "inputs",
      "param_name": "Concat_10",
      "values": [48,48]
    },
    {
      "op_name": "Resize_11",
      "param_target": "inputs",
      "param_name": "Concat_11",
      "values": [48,48]
    },
    {
      "op_name": "Resize_12",
      "param_target": "inputs",
      "param_name": "Concat_12",
      "values": [48,48]
    },
    {
      "op_name": "Resize_13",
      "param_target": "inputs",
      "param_name": "Concat_14",
      "values": [192,192]
    },
    {
      "op_name": "Reshape_0",
      "param_target": "outputs",
      "param_name": "Reshape_0",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_1",
      "param_target": "outputs",
      "param_name": "Reshape_1",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Transpose_0",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "Transpose_1",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "Softmax_0",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    }
  ]
}

Description

If the output layer to be verified is a Constant value, Keras appears to abort. Therefore, we believe that Constant and ConstantOfShape should be excluded from the validation target. This should not be determined by the OP type, but rather by whether the output is an np.ndarray, in order to keep the check generic (see the sketch after the log below).

onnx2tf -i human_segmentation_pphumanseg_2021oct.onnx -prf replace.json -cotof -cotoa 1e-1
ValueError: Output tensors of a Functional model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: [[[[-4.45949372e-05  2.32167356e-03  3.76565661e-03 ...
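
A minimal sketch of that guard, assuming the validation loop keeps its candidate outputs in a list named outputs (the variable names are illustrative, not the actual onnx2tf internals):

import numpy as np

validated_outputs = [
    out for out in outputs
    # a np.ndarray here means the value was folded to a constant and carries
    # no TF layer metadata, which is what makes Keras abort
    if not isinstance(out, np.ndarray)
]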


[ConvNext] Add layer shape wrong

Issue Type

Others

onnx2tf version number

1.1.24

Download URL for ONNX

https://drive.google.com/file/d/15TV0y-Go0VYdyV4FKNhciVDWa3frECfm/view?usp=sharing

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "/encoder/stages.0/layers/layers.0/Transpose_1",
      "param_target": "outputs",
      "param_name": "/encoder/stages.0/layers/layers.0/Transpose_1_output_0",
      "post_process_transpose_perm": [0,2,3,1]
    }
  ]
}

Description

Hi - thanks for the super useful library.

I'm having trouble diagnosing the problem with an add op - I've narrowed it down to the op, but can't seem to get the right adjustment in the parameter_replacement.json file to fix it, and am not sure exactly how to spot the issue in the attached screenshots.


Many thanks!

Seems can not handle Concat op

Issue Type

Others

Download URL for ONNX

model.zip

Description

I am trying to convert the above model.onnx to tflite format by executing onnx2tf -i EyeNet.onnx, but it throws an error at the Concat op:
log:

Model loaded ========================================================================

Model convertion started ============================================================
INFO: input_op_name: input shape: [1, 1, 192, 192] dtype: float32

INFO: onnx_op_type: Conv onnx_op_name: Conv_0
INFO: input_name.1: input shape: [1, 1, 192, 192] dtype: float32
INFO: input_name.2: onnx::Conv_456 shape: [8, 1, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_457 shape: [8] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.4 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad/Pad:0 shape: (1, 194, 194, 1) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 1, 8) dtype: float32
INFO: input.3.bias: shape: (8,) dtype: float32
INFO: output.1.output: name: tf.math.add/Add:0 shape: (1, 96, 96, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_1
INFO: input_name.1: input.4 shape: None dtype: None
INFO: output_name.1: onnx::Conv_282 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add/Add:0 shape: (1, 96, 96, 8) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu/LeakyRelu:0 shape: (1, 96, 96, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_2
INFO: input_name.1: onnx::Conv_282 shape: None dtype: None
INFO: input_name.2: onnx::Conv_459 shape: [8, 1, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_460 shape: [8] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.12 shape: None dtype: None
INFO: tf_op_type: depthwise_conv2d_v2
INFO: input.1.input: name: tf.compat.v1.pad_1/Pad:0 shape: (1, 98, 98, 8) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 8, 1) dtype: <dtype: 'float32'>
INFO: input.3.bias: shape: (8,) dtype: float32
INFO: output.1.output: name: tf.math.add_1/Add:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_3
INFO: input_name.1: input.12 shape: None dtype: None
INFO: output_name.1: onnx::Conv_285 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_1/Add:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_1/LeakyRelu:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_4
INFO: input_name.1: onnx::Conv_285 shape: None dtype: None
INFO: input_name.2: onnx::Conv_462 shape: [16, 8, 1, 1] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_463 shape: [16] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.20 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.nn.leaky_relu_1/LeakyRelu:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (1, 1, 8, 16) dtype: float32
INFO: input.3.bias: shape: (16,) dtype: float32
INFO: output.1.output: name: tf.math.add_2/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_5
INFO: input_name.1: input.20 shape: None dtype: None
INFO: output_name.1: onnx::Conv_288 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_2/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_2/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_6
INFO: input_name.1: onnx::Conv_288 shape: None dtype: None
INFO: input_name.2: onnx::Conv_465 shape: [16, 1, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_466 shape: [16] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.28 shape: None dtype: None
INFO: tf_op_type: depthwise_conv2d_v2
INFO: input.1.input: name: tf.compat.v1.pad_2/Pad:0 shape: (1, 50, 50, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 16, 1) dtype: <dtype: 'float32'>
INFO: input.3.bias: shape: (16,) dtype: float32
INFO: output.1.output: name: tf.math.add_3/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_7
INFO: input_name.1: input.28 shape: None dtype: None
INFO: output_name.1: onnx::Conv_291 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_3/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_3/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_8
INFO: input_name.1: onnx::Conv_291 shape: None dtype: None
INFO: input_name.2: onnx::Conv_468 shape: [16, 16, 1, 1] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_469 shape: [16] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.36 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.nn.leaky_relu_3/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (1, 1, 16, 16) dtype: float32
INFO: input.3.bias: shape: (16,) dtype: float32
INFO: output.1.output: name: tf.math.add_4/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_9
INFO: input_name.1: input.36 shape: None dtype: None
INFO: output_name.1: onnx::Conv_294 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_4/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_4/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_10
INFO: input_name.1: onnx::Conv_294 shape: None dtype: None
INFO: input_name.2: onnx::Conv_471 shape: [8, 16, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_472 shape: [8] dtype: <class 'numpy.float32'>
INFO: output_name.1: onnx::Concat_470 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_3/Pad:0 shape: (1, 50, 50, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 16, 8) dtype: float32
INFO: input.3.bias: shape: (8,) dtype: float32
INFO: output.1.output: name: tf.math.add_5/Add:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_11
INFO: input_name.1: onnx::Conv_294 shape: None dtype: None
INFO: input_name.2: onnx::Conv_474 shape: [4, 16, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_475 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.48 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_4/Pad:0 shape: (1, 50, 50, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 16, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_6/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_12
INFO: input_name.1: input.48 shape: None dtype: None
INFO: output_name.1: onnx::Conv_299 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_6/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_5/LeakyRelu:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_13
INFO: input_name.1: onnx::Conv_299 shape: None dtype: None
INFO: input_name.2: onnx::Conv_477 shape: [4, 4, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_478 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: onnx::Concat_476 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_5/Pad:0 shape: (1, 50, 50, 4) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 4, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_7/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_14
INFO: input_name.1: onnx::Conv_299 shape: None dtype: None
INFO: input_name.2: onnx::Conv_480 shape: [4, 4, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_481 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.60 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_6/Pad:0 shape: (1, 50, 50, 4) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 4, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_8/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_15
INFO: input_name.1: input.60 shape: None dtype: None
INFO: output_name.1: onnx::Conv_304 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_8/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_6/LeakyRelu:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_16
INFO: input_name.1: onnx::Conv_304 shape: None dtype: None
INFO: input_name.2: onnx::Conv_483 shape: [4, 4, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_484 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: onnx::Concat_482 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_7/Pad:0 shape: (1, 50, 50, 4) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 4, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_9/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Concat onnx_op_name: Concat_17
INFO: input_name.1: onnx::Concat_470 shape: None dtype: None
INFO: input_name.2: onnx::Concat_476 shape: None dtype: None
INFO: input_name.3: onnx::Concat_482 shape: None dtype: None
INFO: output_name.1: input.68 shape: None dtype: None
ERROR: The trace log is below.
Traceback (most recent call last):
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\utils\common_functions.py", line 176, in print_wrapper_func
    result = func(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\utils\common_functions.py", line 225, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\ops\Concat.py", line 61, in make_node
    tensor_rank=len(shape),
TypeError: object of type 'NoneType' has no len()

In the savedModel, proper signatures are not getting added (issue exists for resnet18 as well)

Issue Type

Others

Download URL for ONNX

Default resnet18-v1-7.onnx has the same issue

Description

I used the example resnet onnx model. The converted SavedModel doesn't have the required signatures. I was trying to use tensorflowjs_converter to convert it into a tfjs model and it was failing on the missing signatures.

Here is the output of saved_model_cli for resnet:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:

Incorrect conversion of DepthToSpace in CRD mode

Issue Type

Others

onnx2tf version number

1.3.14

onnx version number

1.12.0

tensorflow version number

2.10.0

Download URL for ONNX

https://drive.google.com/file/d/1XeM7XpjOsL37qAdpWa1j6x9KMzYbiHgW/view?usp=sharing

Parameter Replacement JSON

{}

Description

  1. Personal development
  2. Pixelshuffle layer in PyTorch (CRD mode for ONNX) is not converting correctly to TF.
import torch
import tensorflow as tf
import numpy as np

from onnx2tf import convert

class PixelShuffle(torch.nn.Module):
    def __init__(self, upscale_factor=2):
        super().__init__()
        self._upscale_factor = upscale_factor
    def forward(self, inp):
        out = torch.nn.functional.pixel_shuffle(inp, upscale_factor=self._upscale_factor)
        return out

upscale_factor = 2
b, c, h, w = 2, 3, 8, 8
max_val = b*c*h*w*upscale_factor*upscale_factor
ps = PixelShuffle(upscale_factor=upscale_factor)
inp = (torch.arange(max_val)/max_val).view(b, c*upscale_factor*upscale_factor, h, w)
out_torch = ps(inp)

_ = torch.onnx.export(ps,
                      torch.randn_like(inp),
                      "ps.onnx",
                      opset_version=17,
                      export_params=True,
                      verbose=False,
                      input_names=["input"],
                      output_names=["output"])

convert(input_onnx_file_path="ps.onnx",
       output_folder_path="ps_tf",
       non_verbose=True)

interpreter = tf.lite.Interpreter(model_path="ps_tf/model_float32.tflite")
interpreter.allocate_tensors()

# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

input_data = (inp).permute(0,2,3,1).cpu().numpy()

interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

out_tf = interpreter.get_tensor(output_details[0]['index'])

print(np.allclose(out_torch.permute(0,2,3,1).numpy(), out_tf))

This gives False output.
3. I fixed the bug in the DepthToSpace op and am creating a pull request now.
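
For context, CRD-mode DepthToSpace can be emulated with a reshape/transpose/reshape chain, since tf.nn.depth_to_space only implements DCR channel ordering. A minimal NHWC sketch (assuming static shapes; this is illustrative, not necessarily the exact fix in the pull request):

import tensorflow as tf

def depth_to_space_crd(x, r):
    # x: [N, H, W, C*r*r] in NHWC with CRD channel ordering
    n, h, w, c = x.shape
    c_out = c // (r * r)
    x = tf.reshape(x, [n, h, w, c_out, r, r])       # split channels into (C, r1, r2)
    x = tf.transpose(x, [0, 1, 4, 2, 5, 3])         # [N, H, r1, W, r2, C]
    return tf.reshape(x, [n, h * r, w * r, c_out])  # merge into [N, H*r, W*r, C]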

protobuf version issue when trying to install packages in colab

Issue Type

Others

onnx2tf version number

N/A

onnx version number

N/A

tensorflow version number

2.10.0

Download URL for ONNX

N/A

Parameter Replacement JSON

N/A

Description

I tried to install onnx2tf in colab as instructed

!sudo add-apt-repository -y ppa:deadsnakes/ppa
...
!python3.9 -m pip install tensorflow==2.10.0 \
...
  && python3.9 -m pip install -U onnx2tf \
  && python3.9 -m pip install -U protobuf==3.20.3

But I got the following error

tensorflow 2.10.0 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.3 which is incompatible.
tensorboard 2.10.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.3 which is incompatible.

Is that expected?

Thanks!

[YOLOX-X] [Faster-RCNN] Error when outputting to h5 file. `AttributeError: 'numpy.dtype[int64]' object has no attribute 'item'`

Issue Type

Others

onnx2tf version number

1.5.28

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

https://s3.ap-northeast-2.wasabisys.com/temp-models/onnx2tf_146/yolox_x.onnx
https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/faster_rcnn-10.onnx

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. Error when outputting to a Keras (h5) file
    onnx2tf -i yolox_x.onnx -oh5
    
    'numpy.dtype[int64]' object has no attribute 'item'
      File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 744, in convert
        model.save(f'{output_folder_path}/{output_file_name}_float32.h5')
      File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 1735, in main
        model = convert(
      File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 1786, in <module>
        main()
    AttributeError: 'numpy.dtype[int64]' object has no attribute 'item'
    
The problem does not occur with YOLOX-S, so I suspect the problem is in the post-processing part.
    https://s3.ap-northeast-2.wasabisys.com/temp-models/onnx2tf_146/yolox_s.onnx

Batchnorm after convtranspose converted to wrong bias

Issue Type

Others

onnx2tf version number

1.1.23

Download URL for ONNX

gist for reproduction : https://colab.research.google.com/gist/Hyunseok-Kim0/41920f44e994caa96271d604e5831775/onnx2tf_batchnorm_test.ipynb

Parameter Replacement JSON

N/A

Description

First of all, thank you for sharing another wonderful tool. onnx2tf looks very stable and useful.

  1. Purpose: Personal development

  2. What: I tried to convert centernet (https://github.com/xingyizhou/CenterNet) to tflite. It was very impressive that the initial conversion from onnx to tflite completed without an error using onnx2tf. However, I found that the tflite generated by onnx2tf shows different output compared with the one generated by openvino2tensorflow. Most of the network components were identical except for the TransposeConv layer, which was followed by a Batchnorm layer.

part of the tflite from openvino2tensorflow (generates correct output)

part of the tflite from onnx2tf (generates wrong output)

As you can see in the figures above, the TransposeConv layer from openvino2tensorflow has no bias while the onnx2tf version has some bias. Also, the Add layers converted from the Batchnorm layer have different values, shown on the right side of the images.

I think this problem is related to the Batchnorm layer, but I couldn't find a solution.

  3. How: please use the gist to reproduce this problem. (https://colab.research.google.com/gist/Hyunseok-Kim0/41920f44e994caa96271d604e5831775/onnx2tf_batchnorm_test.ipynb)
    The gist generates a dummy pytorch network which contains only two layers, (Conv + Batchnorm) or (ConvTranspose + Batchnorm). After the conversion to tflite, the inference outputs are compared with each other.
    For the network with (Conv + Batchnorm), it shows the same network structure and the same outputs. However, (ConvTranspose + Batchnorm) shows different outputs.

target onnx

tflite from openvino2tensorflow

tflite from onnx2tf

In the tflite from onnx2tf, the bias of the Batchnorm layer is moved to the Add layer, and another unknown bias value is added to the TransposeConv layer. In the tflite from openvino2tensorflow, the TransposeConv layer has no bias and the unknown value is added in the Add layer.
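
For reference, the math I would expect when folding BatchNorm into the preceding (transposed) convolution: with BN parameters gamma, beta, mean, var and conv bias b, the folded bias should be (b - mean) * gamma / sqrt(var + eps) + beta, and the weights should be scaled per output channel by the same factor. A quick numpy sketch (symbol names are the usual BN parameters, not onnx2tf variable names):

import numpy as np

def folded_bias(b, gamma, beta, mean, var, eps=1e-5):
    # per-channel scale that BatchNorm applies after the convolution
    scale = gamma / np.sqrt(var + eps)
    return (b - mean) * scale + beta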

Also, I found there is a typo in ConvTranspose.py at line 68: tf_layers_dict[input_weights.name] is used instead of tf_layers_dict[input_bias.name], but it does not look like the cause of this problem.

input_tensor = tf_layers_dict[input_tensor.name]['tf_node'] \
    if isinstance(input_tensor, gs.Variable) else input_tensor
input_tensor_shape = input_tensor.shape
input_weights = tf_layers_dict[input_weights.name]['tf_node'] \
    if isinstance(input_weights, gs.Variable) else input_weights
input_bias = tf_layers_dict[input_weights.name]['tf_node'] \
    if isinstance(input_bias, gs.Variable) else input_bias

I will keep trying to find a solution; please share your ideas about this problem.

[silero_vad] Question about Pad

Issue Type

Others

onnx2tf version number

1.2.4

Download URL for ONNX

https://github.com/snakers4/silero-vad/blob/v3.1/files/silero_vad.onnx

Parameter Replacement JSON

{}

Description

  1. Purpose: Personal development
  2. What: A conversion error occurred
    Running script: onnx2tf -i silero_vad.onnx

INFO: onnx_op_type: Pad
INFO: input shape: [1, 1, 1, 512]
...
INFO: output shape: [1, 1, 1, 768]
INFO: tf_op_type: Pad
INFO: input shape: [1, 1, 1, 512]
...
INFO: output shape: [1, 1, 257, 512]

ValueError: Exception encountered when calling layer "tf.squeeze"
Can not squeeze dim[3], expected a dimension of 1, got 512

So why is the Pad behavior not the same after conversion, and what should I do?

  3. How: I've tried parameter replacement but the error still occurred, so I think it is related to the dimensional decompression after Reshape.
  4. Why: Because I want tflite.

Not able to reshape input in replace.json

Issue Type

Others

onnx2tf version number

1.0.27

Download URL for ONNX

https://drive.google.com/file/d/1Wkb-xi8NBICJttIyctUL3m7ETGHv2BZ8/view

Parameter Replacement JSON

{
    "format_version": 1,
    "operations": [
        {
            "op_name": "Flatten_10",
            "param_target": "inputs",
            "param_name": "input.16",
            "pre_process_transpose_perm": [3,1,2,0]
        },
        {
            "op_name": "Flatten_10",
            "param_target": "outputs",
            "param_name": "onnx::Gemm_31",
            "post_process_transpose_perm": [1, 0]
        }
    ]
}

Description

  1. School research project on TinyML.
  2. I simplified the model with onnxsim, then ran this command: onnx2tf -i baseline_simplified.onnx -b 1

The onnx model has a Flatten layer. The input shape is ['batch_size', 32, 1, 3] and the output shape is ['batch_size', 96]. The OP gets converted to tf.reshape; there the input shape is (1, 1, 3, 32) and the output shape is (3, 32).

  3. I attempted to solve it with the Parameter Replacement JSON to transpose both the input and the output. The transpose for the output works, but the transpose for the input gives me this error. (A sketch of the intended computation follows this list.)

  4. I need the tflite model so that I can quantise it and deploy using TFLiteMicro.

  5. I have tried using the openvino library to convert onnx to openvino, then openvino to tf. The conversion succeeds, but the output of the tf model is very different from the onnx output.
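
For reference, here is the computation I would expect the converted graph to perform for the Flatten, sketched with the shapes from this model; the transpose back to NCHW must happen before the reshape so the element order matches ONNX:

import tensorflow as tf

x = tf.random.normal([1, 1, 3, 32])     # NHWC view of the ONNX [1, 32, 1, 3] tensor
x_nchw = tf.transpose(x, [0, 3, 1, 2])  # restore the ONNX [1, 32, 1, 3] order
flat = tf.reshape(x_nchw, [1, -1])      # [1, 96], matching the ONNX Flatten output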

Bug in `onnx_tf_tensor_validation`

Issue Type

Others

onnx2tf version number

1.5.23

onnx version number

1.12.0

tensorflow version number

2.10

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/pidnet_S_cityscapes_192x320.onnx

Parameter Replacement JSON

N/A

Description

  1. Purpose: Personal development
  2. What: onnx_tf_tensor_validation uses tensor shapes to find matching pairs between the onnx and tflite outputs. Due to this logic, some outputs fail to be matched and generate a wrong absolute error.

The output of global average pooling has a shape of (1 x 512 x 1 x 1), and another average pooling output has the same shape and the same values. When onnx_tf_tensor_validation runs, the output of the global average pooling fails to be matched with the correct onnx output, generating an Unmatched result as below.

A similar bug can happen in other onnx files if multiple outputs have the same shape and the same values.
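
One way to avoid this could be to match by exact tensor name first and only fall back to shape-based matching for the leftovers. A sketch under that assumption (the dict-based data layout is illustrative, not the actual onnx2tf internals):

def match_outputs(onnx_outs: dict, tf_outs: dict):
    pairs, used = [], set()
    # pass 1: unambiguous name matches
    for name, onnx_t in onnx_outs.items():
        if name in tf_outs:
            pairs.append((onnx_t, tf_outs[name]))
            used.add(name)
    # pass 2: shape-based fallback, never reusing an already matched tensor
    for name, onnx_t in onnx_outs.items():
        if name in tf_outs:
            continue
        for tf_name, tf_t in tf_outs.items():
            if tf_name not in used and tuple(tf_t.shape) == tuple(onnx_t.shape):
                pairs.append((onnx_t, tf_t))
                used.add(tf_name)
                break
    return pairs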

Bias from Conv2d operator seems to be added wrongly.

Issue Type

Others

onnx2tf version number

0.28

Download URL for ONNX

Added screenshots of the models to show bias values

Description

My model is getting converted properly, but I see a results mismatch, so I'm trying to figure out what's causing the differences.
One possible issue is how the bias from the Conv2D operator is added during conversion.
I'm adding screenshots of the bias in the Conv2D operator from the original onnx model and the corresponding conversion to tf using the tool.
You can see the bias at index [1] is added at the end. Looks weird. Please correct me if I am wrong.


[MobileFormer] Converted model outputs values mismatch with original ones.

Issue Type

Others

onnx2tf version number

1.4.2

onnx version number

1.12.0

tensorflow version number

2.10.1

Download URL for ONNX

9e: https://drive.google.com/file/d/1vGzO9MZGX-yGz6ATm4yHVJMASZACuy2t/view?usp=share_link
9t: https://drive.google.com/file/d/1Be-a7Pmo6auyAXHChAhC1qtzkbr5EytJ/view?usp=share_link
12e: https://drive.google.com/file/d/1NR-0Fm5q_ludb5MTWgDPzRzqDFKOsCkw/view?usp=share_link
12t: https://drive.google.com/file/d/1ReuXt4gpbq6O7mpUGudaBKjb6wI43ka6/view?usp=share_link

Parameter Replacement JSON

9e: https://drive.google.com/file/d/1av1dL5sWfjlghf2-jtmws-CR-Tcv_WIP/view?usp=share_link
9t: https://drive.google.com/file/d/18qB_VOW4Xp9PvCN4G-eRRnWhh9o1fF3F/view?usp=share_link
12e: https://drive.google.com/file/d/1T0nTkb5Szf9XjaNVVBTzzu5C9dgOURLd/view?usp=share_link
12t: https://drive.google.com/file/d/1JhH8K64PDJong4qkZbzyyJDKRExE7wlJ/view?usp=share_link

Description

Hi Master PINTO,

Following up on issue #103, the output values of the converted model differ from what is expected. I tried to export the pytorch model in both eval and training mode, with both opset==9 and opset==12. For the models and parameter replacement files, the number stands for the opset version, e stands for eval, and t stands for training. This issue happens in all scenarios. Please see this notebook https://drive.google.com/file/d/1mErgTLFiYGTgUnMRDcNp3HkOkSWa96aL/view?usp=share_link to replicate and demonstrate the issue.

Also, since you match every onnx operation with a basic tf operation, is there no way to restore those weights as trainable parameters? Thank you so much for continuing to dig into this. If you need the original pytorch model or anything else I can assist with, let me know!

TypeError: EndVector() missing 1 required positional argument: 'vectorNumElems'

Issue Type

Others

onnx2tf version number

1.1.4

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.3/lite-model_yamnet_classification_tflite_1.onnx

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "tower0/network/layer1/conv/Relu6;tower0/network/layer1/conv/BatchNorm/FusedBatchNormV3;tower0/network/layer2/sepconv/BatchNorm/FusedBatchNormV3;tower0/network/layer2/sepconv/depthwise;tower0/network/layer1/conv/Conv2D_prequant__219",
      "param_target": "inputs",
      "param_name": "new_shape__513",
      "values": [1,96,64,1]
    }
  ]
}

Description

  1. Error when calling tflite converter.
    onnx2tf -i lite-model_yamnet_classification_tflite_1.onnx -prf replace.json
    TypeError: EndVector() missing 1 required positional argument: 'vectorNumElems'
  2. I upgraded flatbuffers by referring to the following.
    https://stackoverflow.com/questions/73442005/tflite-model-maker-in-colab-typeerror-endvector-missing-1-required-positional
    From:
      pip show flatbuffers
      Name: flatbuffers
      Version: 1.12
    
    To:
      pip install -U flatbuffers
    
      pip show flatbuffers
      Name: flatbuffers
      Version: 22.10.26
    
  3. Cannot convert to tflite.
  4. Solution
    pip install -U flatbuffers
    

[face_recognition_sface] [Query][tflite]The tool or cmdline to support NCHW between NHWC

Issue Type

Others

onnx2tf version number

1.0.48

Download URL for ONNX

Model:
1.face_recognition_sface_2021dec.onnx
2.https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx

Parameter Replacement JSON

NA

Description

Hi @PINTO0309
Thanks for your work; the conversion works when converting ONNX to tflite without quantization.
For a quantized ONNX model or tflite model, the tool does not seem to work, giving the error below:
ERROR: QLinearMul OP is not yet implemented.

So I would like to check with you:
does the tool support NCHW <-> NHWC conversion for quantized tflite models?

Thanks

[TUNIT] conversion error, TypeError: 'Variable' object is not iterable

Issue Type

Others

onnx2tf version number

1.1.44

Download URL for ONNX

onnx/onnx-tensorflow#693
https://github.com/sayakpaul/Adventures-in-TensorFlow-Lite/releases/download/v0.2.0/tunit_onnx_files.tar.gz

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Reshape_136",
      "param_target": "outputs",
      "param_name": "194",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_143",
      "param_target": "outputs",
      "param_name": "205",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_170",
      "param_target": "outputs",
      "param_name": "234",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_177",
      "param_target": "outputs",
      "param_name": "245",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_206",
      "param_target": "outputs",
      "param_name": "276",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_213",
      "param_target": "outputs",
      "param_name": "287",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_240",
      "param_target": "outputs",
      "param_name": "316",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_247",
      "param_target": "outputs",
      "param_name": "327",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_303",
      "param_target": "outputs",
      "param_name": "385",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_310",
      "param_target": "outputs",
      "param_name": "396",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_364",
      "param_target": "outputs",
      "param_name": "452",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_371",
      "param_target": "outputs",
      "param_name": "463",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_425",
      "param_target": "outputs",
      "param_name": "519",
      "post_process_transpose_perm": [0,2,1]
    },
    {
      "op_name": "Reshape_432",
      "param_target": "outputs",
      "param_name": "530",
      "post_process_transpose_perm": [0,2,3,1]
    }
  ]
}

Description

1. Personal development
https://github.com/clovaai/tunit
2. TypeError: 'Variable' object is not iterable
3. replace.json
5. Error log

ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 262, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/home/xxxxx/git/onnx2tf/onnx2tf/utils/common_functions.py", line 324, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/home/xxxxx/git/onnx2tf/onnx2tf/ops/BatchNormalization.py", line 41, in make_node
    graph_node.outputs = graph_node.outputs[0]
  File "/home/xxxxx/.local/lib/python3.8/site-packages/onnx_graphsurgeon/ir/node.py", line 89, in __setattr__
    getattr(self, name).extend(value)
  File "/home/xxxxx/.local/lib/python3.8/site-packages/onnx_graphsurgeon/util/misc.py", line 111, in extend
    super().extend(iterable)
TypeError: 'Variable' object is not iterable

[TODO] NMS implementation for GPU Delegate / TPU support

Issue Type

Others

onnx2tf version number

1.7.x

onnx version number

1.13.1

tensorflow version number

2.10.0

Download URL for ONNX

postprocess_8400.onnx.zip

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. [TODO] NMS implementation for GPU Delegate support
    Generate full-scratch, redundant NMS blocks without NonMaxSuppressionV4. Perform this workaround only when the --replace_nonmaxsuppression_to_pseudo_nonmaxsuppression option is enabled. However, this model customization should ideally be implemented on the PyTorch side that exports the ONNX in the first place. Therefore, the full-scratch NMS processing group generated by this tool is a very specific customization that can only be used with very limited models (a loop-free formulation is sketched below).

Ref: Yolov7-tiny to TensorflowLite conversion results in a dynamic output model incompatible with TfLite Java API #159
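
One loop-free formulation that lowers to plain elementwise/matrix TFLite ops (and is therefore GPU Delegate friendly) is the "Fast NMS" trick from YOLACT: take the top-k boxes by score, build the pairwise IoU matrix, and suppress any box that overlaps a higher-scoring one. A sketch of that idea, not the exact block this tool generates:

import tensorflow as tf

def fast_nms(boxes, scores, iou_threshold=0.45, top_k=100):
    # boxes: [N, 4] as (x1, y1, x2, y2), scores: [N]; assumes N >= top_k
    scores, idx = tf.math.top_k(scores, k=top_k)   # sort candidates by score
    boxes = tf.gather(boxes, idx)
    x1, y1, x2, y2 = tf.split(boxes, 4, axis=-1)   # each [top_k, 1]
    # pairwise intersection and IoU, [top_k, top_k]
    inter_w = tf.maximum(0.0, tf.minimum(x2, tf.transpose(x2)) - tf.maximum(x1, tf.transpose(x1)))
    inter_h = tf.maximum(0.0, tf.minimum(y2, tf.transpose(y2)) - tf.maximum(y1, tf.transpose(y1)))
    inter = inter_w * inter_h
    area = (x2 - x1) * (y2 - y1)
    iou = inter / (area + tf.transpose(area) - inter)
    # strict upper triangle: each box's IoU against higher-scoring boxes only
    upper = tf.linalg.band_part(iou, 0, -1) - tf.linalg.band_part(iou, 0, 0)
    keep = tf.cast(tf.reduce_max(upper, axis=0) <= iou_threshold, scores.dtype)
    # zero out suppressed scores instead of gathering, so all shapes stay static
    return boxes, scores * keep

Zeroing suppressed scores rather than gathering the survivors is what keeps every output tensor at a fixed shape, which is exactly what delegates that reject dynamic tensors require.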

[FastestDet] The absolute error of `AveragePool`, `GlobalAveragePool` is quite large

Issue Type

Others

onnx2tf version number

1.5.12

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/fastestdet.onnx
https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/ppmattingv2_stdc1_human_480x640.onnx

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. The absolute error of AveragePool is quite large. In addition, there is almost no error in the other OPs.
    onnx2tf -i fastestdet.onnx -cotof -cotoa 1e-4
  3. At this time I do not know the cause of the larger error (one candidate cause is sketched below).
  4. I want to generate a tflite model with small accuracy degradation.
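
One candidate cause (an assumption, not a confirmed diagnosis): ONNX AveragePool with count_include_pad=1 averages over the padded zeros, while TF's SAME padding excludes padded cells from the divisor. The inclusive behavior can be reproduced by padding explicitly and pooling with VALID:

import tensorflow as tf

def avg_pool_include_pad(x, ksize=2, stride=2, pad=1):
    # x: [N, H, W, C]; zero-pad first so padded cells count in the mean,
    # matching ONNX count_include_pad=1
    x = tf.pad(x, [[0, 0], [pad, pad], [pad, pad], [0, 0]])
    return tf.nn.avg_pool2d(x, ksize=ksize, strides=stride, padding="VALID")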

[ALIKE] Inference error for `MaxPool` with padding is very large

Issue Type

Others

onnx2tf version number

1.5.33

onnx version number

1.13.1

tensorflow version number

2.10.0

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/alike_t_opset11_192x320.onnx

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. The inference error for MaxPool with padding is very large. However, I believe a separate investigation is needed to determine whether this is a problem with the logic used to verify accuracy or with MaxPool's padding process (a guess at the cause is sketched below).
    https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/alike_t_opset11_192x320.onnx
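
A guess at the cause (to be confirmed by the investigation above): ONNX MaxPool pads with -inf, so emulating its explicit pads with a zero-filled tf.pad corrupts border windows whenever the activations are negative. Padding with the dtype minimum preserves the semantics:

import tensorflow as tf

def max_pool_with_onnx_pads(x, ksize, strides, pads):
    # pads: (top, bottom, left, right) from the ONNX attribute
    t, b, l, r = pads
    x = tf.pad(
        x, [[0, 0], [t, b], [l, r], [0, 0]],
        constant_values=x.dtype.min,  # -inf-like padding, as ONNX MaxPool expects
    )
    return tf.nn.max_pool2d(x, ksize=ksize, strides=strides, padding="VALID")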

[TODO] Approximate computation implementation of `Atan` / `Atan2`

Issue Type

Others

onnx2tf version number

1.5.x

onnx version number

1.13.1

tensorflow version number

2.10.0

Download URL for ONNX

N/A

Parameter Replacement JSON

N/A

Description

  1. Personal
  2. Approximate computation implementation of Atan / Atan2 (a sketch follows the references below)
  3. Refs:
    1. https://developer.download.nvidia.com/cg/atan.html
    2. https://developer.download.nvidia.com/cg/atan2.html
    3. https://zenn.dev/pinto0309/articles/8f6df1d2304395
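
For reference, the kind of approximation those pages describe can be sketched in a few TF ops: the Rajan et al. polynomial for |x| <= 1, the reflection atan(x) = sign(x)*pi/2 - atan(1/x) for |x| > 1, and atan2 built on top with quadrant correction (an illustrative sketch, not the implementation that ships in onnx2tf):

import math
import tensorflow as tf

def atan_approx(x):
    # Rajan et al.: atan(z) ~= pi/4*z - z*(|z|-1)*(0.2447 + 0.0663*|z|) for |z| <= 1
    def core(z):
        return math.pi / 4 * z - z * (tf.abs(z) - 1.0) * (0.2447 + 0.0663 * tf.abs(z))
    inv = tf.abs(x) > 1.0
    z = tf.where(inv, 1.0 / x, x)        # fold large arguments into [-1, 1]
    y = core(z)
    return tf.where(inv, tf.sign(x) * math.pi / 2 - y, y)

def atan2_approx(y, x):
    # quadrant correction on top of atan; the epsilon is a crude x == 0 guard
    angle = atan_approx(y / (x + 1e-12))
    angle = tf.where((x < 0) & (y >= 0), angle + math.pi, angle)
    return tf.where((x < 0) & (y < 0), angle - math.pi, angle)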

[FastestDet] ValueError: Exception encountered when calling layer "tf.nn.convolution_4" (type TFOpLambda)

Issue Type

Others

onnx2tf version number

1.1.38

Download URL for ONNX

Parameter Replacement JSON

  • replace.json
    {
      "format_version": 1,
      "operations": [
        {
          "op_name": "Gather_18",
          "param_target": "outputs",
          "param_name": "onnx::Concat_445",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_33",
          "param_target": "outputs",
          "param_name": "onnx::Concat_463",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_48",
          "param_target": "outputs",
          "param_name": "onnx::Concat_481",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_72",
          "param_target": "outputs",
          "param_name": "onnx::Concat_513",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_87",
          "param_target": "outputs",
          "param_name": "onnx::Concat_531",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_102",
          "param_target": "outputs",
          "param_name": "onnx::Concat_549",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_117",
          "param_target": "outputs",
          "param_name": "onnx::Concat_567",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_132",
          "param_target": "outputs",
          "param_name": "onnx::Concat_585",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_147",
          "param_target": "outputs",
          "param_name": "onnx::Concat_603",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_162",
          "param_target": "outputs",
          "param_name": "onnx::Concat_621",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_186",
          "param_target": "outputs",
          "param_name": "onnx::Concat_653",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_201",
          "param_target": "outputs",
          "param_name": "onnx::Concat_671",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Gather_216",
          "param_target": "outputs",
          "param_name": "onnx::Concat_689",
          "post_process_transpose_perm": [0,2,3,1]
        },
        {
          "op_name": "Transpose_261",
          "param_target": "attributes",
          "param_name": "perm",
          "values": [0,1,2,3]
        },
        {
          "op_name": "Transpose_263",
          "param_target": "attributes",
          "param_name": "perm",
          "values": [0,1,2,3]
        }
      ]
    }

Description

  1. Research
  2. Conversion error occurred.
onnx2tf -i FastestDet.onnx
ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/b920405/git/onnx2tf/onnx2tf/utils/common_functions.py", line 262, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/home/b920405/git/onnx2tf/onnx2tf/utils/common_functions.py", line 324, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/home/b920405/git/onnx2tf/onnx2tf/ops/Conv.py", line 153, in make_node
    tf.nn.convolution(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/local/lib/python3.8/dist-packages/keras/layers/core/tf_op_layer.py", line 119, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.nn.convolution_4" (type TFOpLambda).

Depth of input (44) is not a multiple of input depth of filter (24) for '{{node tf.nn.convolution_4/convolution}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, tf.nn.convolution_4/convolution_internal/filters)' with input shapes: [1,24,44,44], [1,1,24,24].
  3. Parameter Replacement JSON
  4. Research

[human_segmentation_pphumanseg] Conversion error in human_segmentation_pphumanseg_2021oct.onnx

Issue Type

Others

onnx2tf version number

1.1.11

Download URL for ONNX

https://github.com/opencv/opencv_zoo/tree/master/models/human_segmentation_pphumanseg

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Resize_0",
      "param_target": "inputs",
      "param_name": "Concat_0",
      "values": [48,48]
    },
    {
      "op_name": "Resize_1",
      "param_target": "inputs",
      "param_name": "Concat_1",
      "values": [48,48]
    },
    {
      "op_name": "Resize_2",
      "param_target": "inputs",
      "param_name": "Concat_2",
      "values": [48,48]
    },
    {
      "op_name": "Resize_3",
      "param_target": "inputs",
      "param_name": "Concat_3",
      "values": [24,24]
    },
    {
      "op_name": "Resize_4",
      "param_target": "inputs",
      "param_name": "Concat_4",
      "values": [48,48]
    },
    {
      "op_name": "Resize_5",
      "param_target": "inputs",
      "param_name": "Concat_5",
      "values": [48,48]
    },
    {
      "op_name": "Resize_6",
      "param_target": "inputs",
      "param_name": "Concat_6",
      "values": [48,48]
    },
    {
      "op_name": "Resize_7",
      "param_target": "inputs",
      "param_name": "Concat_7",
      "values": [24,24]
    },
    {
      "op_name": "Resize_8",
      "param_target": "inputs",
      "param_name": "Concat_8",
      "values": [24,24]
    },
    {
      "op_name": "Resize_9",
      "param_target": "inputs",
      "param_name": "Concat_9",
      "values": [12,12]
    },
    {
      "op_name": "Resize_10",
      "param_target": "inputs",
      "param_name": "Concat_10",
      "values": [48,48]
    },
    {
      "op_name": "Resize_11",
      "param_target": "inputs",
      "param_name": "Concat_11",
      "values": [48,48]
    },
    {
      "op_name": "Resize_12",
      "param_target": "inputs",
      "param_name": "Concat_12",
      "values": [48,48]
    },
    {
      "op_name": "Resize_13",
      "param_target": "inputs",
      "param_name": "Concat_14",
      "values": [192,192]
    },
    {
      "op_name": "Reshape_0",
      "param_target": "outputs",
      "param_name": "Reshape_0",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_1",
      "param_target": "outputs",
      "param_name": "Reshape_1",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Transpose_0",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "Transpose_1",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "Softmax_0",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    }
  ]
}

Description

  1. I get an error when I run onnxsim. Converting the model without running onnxsim also results in an error.
  • Error message
    onnx2tf -i human_segmentation_pphumanseg_2021oct.onnx
    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/xxxxx/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
        cli.main()
      File "/home/xxxxx/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
        run()
      File "/home/xxxxx/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
        runpy.run_path(target, run_name="__main__")
      File "/home/xxxxx/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
        return _run_module_code(code, init_globals, run_name,
      File "/home/xxxxx/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/home/xxxxx/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
        exec(code, run_globals)
      File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 1142, in <module>
        main()
      File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 1108, in main
        model = convert(
      File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 558, in convert
        concrete_func = run_model.get_concrete_function(
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 1239, in get_concrete_function
        concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 1219, in _get_concrete_function_garbage_collected
        self._initialize(args, kwargs, add_initializers_to=initializers)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 785, in _initialize
        self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2523, in _get_concrete_function_internal_garbage_collected
        graph_function, _ = self._maybe_define_function(args, kwargs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2760, in _maybe_define_function
        graph_function = self._create_graph_function(args, kwargs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 2670, in _create_graph_function
        func_graph_module.func_graph_from_py_func(
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1247, in func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py", line 677, in wrapped_fn
        out = weak_wrapped_fn().__wrapped__(*args, **kwds)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1233, in autograph_handler
        raise e.ag_error_metadata.to_exception(e)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 1222, in autograph_handler
        return autograph.converted_call(
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py", line 439, in converted_call
        result = converted_f(*effective_args, **kwargs)
      File "/tmp/__autograph_generated_fileldpw01_n.py", line 6, in <lambda>
        tf__lam = (lambda *inputs: ag__.with_function_scope((lambda lscope: ag__.converted_call(model, (inputs,), None, lscope)), 'lscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)))
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/core/function_wrappers.py", line 113, in with_function_scope
        return thunk(scope)
      File "/tmp/__autograph_generated_fileldpw01_n.py", line 6, in <lambda>
        tf__lam = (lambda *inputs: ag__.with_function_scope((lambda lscope: ag__.converted_call(model, (inputs,), None, lscope)), 'lscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)))
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
        return _call_unconverted(f, args, kwargs, options)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
        return f(*args)
      File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1969, in _create_c_op
        raise ValueError(e.message)
    ValueError: in user code:
    
        File "/home/xxxxx/git/onnx2tf/onnx2tf/onnx2tf.py", line 557, in None  *
            lambda *inputs : model(inputs)
        File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
    
        ValueError: Exception encountered when calling layer "tf.math.add_6" "                 f"(type TFOpLambda).
        
        Dimensions must be equal, but are 48 and 24 for '{{node model/tf.math.add_6/Add}} = AddV2[T=DT_FLOAT](model/tf.nn.relu_13/Relu, model/tf.image.resize/resize/ResizeBilinear)' with input shapes: [1,48,48,16], [1,24,48,16].
        
        Call arguments received by layer "tf.math.add_6" "                 f"(type TFOpLambda):
          • x=tf.Tensor(shape=(1, 48, 48, 16), dtype=float32)
          • y=tf.Tensor(shape=(1, 24, 48, 16), dtype=float32)
          • name='Add_5'
    
  2. To avoid the conversion errors, write replace.json and run it.
    onnx2tf -i human_segmentation_pphumanseg_2021oct.onnx -prf replace.json

The final conversion was successful. https://github.com/PINTO0309/onnx2tf/releases/tag/1.1.12

The output dimensions of Maxpool2D do not correspond

Issue Type

Others

onnx2tf version number

1.3.6

onnx version number

1.12.0

tensorflow version number

2.10.0

Download URL for ONNX

INFO: onnx_op_type: MaxPool onnx_op_name: MaxPool_346
INFO: input_name.1: 645 shape: [1, 64, 22, 22] dtype: float32
INFO: output_name.1: 646 shape: [1, 64, 10, 10] dtype: float32
INFO: tf_op_type: max_pool_v2
INFO: input.1.input: name: tf.nn.relu_16/Relu:0 shape: (1, 64, 22, 22) dtype: <dtype: 'float32'>
INFO: input.2.filters:
INFO: input.3.kernel_shape: val: [3, 3]
INFO: input.4.strides: val: [2, 2]
INFO: input.5.dilations: val: [1, 1]
INFO: input.6.padding: val: VALID
INFO: input.7.ceil_mode: val: False
INFO: output.1.output: name: tf.nn.max_pool2d_5/MaxPool2d:0 shape: (1, 31, 10, 22) dtype: <dtype: 'float32'>

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Reshape_54",
      "param_target": "outputs",
      "param_name": "173",
      "post_process_transpose_perm": [
        0,
        2,
        3,
        1
      ]
    },
    {
      "op_name": "Reshape_73",
      "param_target": "outputs",
      "param_name": "205",
      "post_process_transpose_perm": [
        0,
        2,
        3,
        1
      ]
    },
    {
      "op_name": "Reshape_92",
      "param_target": "outputs",
      "param_name": "237",
      "post_process_transpose_perm": [
        0,
        2,
        3,
        1
      ]
    },
    {
      "op_name": "Relu_306",
      "param_target": "outputs",
      "param_name": "576",
      "post_process_transpose_perm": [
        0,
        2,
        3,
        1
      ]
    },
    {
      "op_name": "Relu_150",
      "param_target": "outputs",
      "param_name": "320",
      "post_process_transpose_perm": [
        0,
        2,
        3,
        1
      ]
    }
  ]
}

Description

In the onnx model, the output of MaxPool2D is [1, 64, 10, 10], but onnx2tf outputs (1, 31, 10, 22). I don't know what went wrong. Should I transpose the input of MaxPool2D?

[human_segmentation_pphumanseg_2021oct] `-cotof` option is used, Validation of 1D tensor always results in `Unmatched`

Issue Type

Others

onnx2tf version number

1.5.8

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

https://github.com/PINTO0309/onnx2tf/releases/download/1.1.27/human_segmentation_pphumanseg_2021oct.onnx

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Resize_0",
      "param_target": "inputs",
      "param_name": "Concat_0",
      "values": [48,48]
    },
    {
      "op_name": "Resize_1",
      "param_target": "inputs",
      "param_name": "Concat_1",
      "values": [48,48]
    },
    {
      "op_name": "Resize_2",
      "param_target": "inputs",
      "param_name": "Concat_2",
      "values": [48,48]
    },
    {
      "op_name": "Resize_3",
      "param_target": "inputs",
      "param_name": "Concat_3",
      "values": [24,24]
    },
    {
      "op_name": "Resize_4",
      "param_target": "inputs",
      "param_name": "Concat_4",
      "values": [48,48]
    },
    {
      "op_name": "Resize_5",
      "param_target": "inputs",
      "param_name": "Concat_5",
      "values": [48,48]
    },
    {
      "op_name": "Resize_6",
      "param_target": "inputs",
      "param_name": "Concat_6",
      "values": [48,48]
    },
    {
      "op_name": "Resize_7",
      "param_target": "inputs",
      "param_name": "Concat_7",
      "values": [24,24]
    },
    {
      "op_name": "Resize_8",
      "param_target": "inputs",
      "param_name": "Concat_8",
      "values": [24,24]
    },
    {
      "op_name": "Resize_9",
      "param_target": "inputs",
      "param_name": "Concat_9",
      "values": [12,12]
    },
    {
      "op_name": "Resize_10",
      "param_target": "inputs",
      "param_name": "Concat_10",
      "values": [48,48]
    },
    {
      "op_name": "Resize_11",
      "param_target": "inputs",
      "param_name": "Concat_11",
      "values": [48,48]
    },
    {
      "op_name": "Resize_12",
      "param_target": "inputs",
      "param_name": "Concat_12",
      "values": [48,48]
    },
    {
      "op_name": "Resize_13",
      "param_target": "inputs",
      "param_name": "Concat_14",
      "values": [192,192]
    },
    {
      "op_name": "Reshape_0",
      "param_target": "outputs",
      "param_name": "Reshape_0",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Reshape_1",
      "param_target": "outputs",
      "param_name": "Reshape_1",
      "post_process_transpose_perm": [0,2,3,1]
    },
    {
      "op_name": "Transpose_0",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "Transpose_1",
      "param_target": "attributes",
      "param_name": "perm",
      "values": [0,1,2,3]
    },
    {
      "op_name": "Softmax_0",
      "param_target": "attributes",
      "param_name": "axis",
      "values": 3
    }
  ]
}

Description

Validation of 1D tensors always results in Unmatched. However, no inference error is introduced, so there is no problem with the behavior of the model, only with the accuracy-check decision. A sketch of the likely cause follows the log below.

ONNX and TF output value validation started =========================================
INFO: validation_conditions: np.allclose(onnx_outputs, tf_outputs, rtol=0.0, atol=1.0, equal_nan=True)
INFO: onnx_output_name: Shape_20 shape: (4,) dtype: int64 validate_result:  Unmatched 
INFO: onnx_output_name: conv2d_73.tmp_0 shape: (1, 64, 96, 96) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: shape_10.tmp_0 shape: (4,) dtype: int32 validate_result:  Unmatched 
INFO: onnx_output_name: batch_norm_0.tmp_2 shape: (1, 64, 96, 96) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: shape_10.tmp_0_slice_0 shape: (2,) dtype: int32 validate_result:  Unmatched 
INFO: onnx_output_name: relu_0.tmp_0 shape: (1, 64, 96, 96) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: Cast_21 shape: (2,) dtype: int64 validate_result:  Unmatched 
INFO: onnx_output_name: conv2d_74.tmp_0 shape: (1, 64, 48, 48) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: batch_norm_1.tmp_2 shape: (1, 64, 48, 48) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: relu_1.tmp_0 shape: (1, 64, 48, 48) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: conv2d_75.tmp_0 shape: (1, 32, 48, 48) dtype: float32 validate_result:  Matches 
INFO: onnx_output_name: conv2d_78.tmp_0 shape: (1, 128, 48, 48) dtype: float32 validate_result:  Matches 

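A plausible explanation, inferred from the log above rather than confirmed in the issue, is that these 1D tensors are shape vectors: the ONNX graph reports them in NCHW order, while the converted TF graph reports the shape of the same tensor in NHWC order, so an elementwise comparison can never pass even though both models behave identically:

import numpy as np

# Hypothetical values matching the (4,) int64 Shape outputs in the log above.
onnx_shape = np.array([1, 64, 96, 96], dtype=np.int64)  # NCHW layout
tf_shape   = np.array([1, 96, 96, 64], dtype=np.int64)  # same tensor, NHWC layout

# The validator's condition from the log: rtol=0.0, atol=1.0
print(np.allclose(onnx_shape, tf_shape, rtol=0.0, atol=1.0))  # False -> "Unmatched"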

[YOLOX] onnxruntime aborts when `keepdims=False` in `ReduceXX` and the final tensor dimension is reduced to zero

Issue Type

Others

onnx2tf version number

1.5.0

onnx version number

1.13.0

tensorflow version number

2.10.0

Download URL for ONNX

YOLOX-X
https://drive.google.com/file/d/1QQLf83JwLI_EoQFU5xOGKUvtFPsFSWCN/view

Parameter Replacement JSON

N/A

Description

Ref issue: [YoloX] output differs between onnx and tflite #107

This problem can occur in many use cases, such as when post-processing yields zero detections. It is not an onnx2tf problem, but rather an onnxruntime specification problem.

ONNX and TF output value validation started =========================================
INFO: validation_conditions: np.allclose(onnx_outputs, tf_outputs, rtol=1e-05, atol=1e-05, equal_nan=True)
2023-01-10 14:18:07.353720422 [E:onnxruntime:, sequential_executor.cc:369 Execute] Non-zero status code returned while running ReduceMax node. Name:'ReduceMax_686' Status Message: /home/b920405/work/onnxruntime/onnxruntime/core/providers/cpu/reduction/reduction_ops.cc:763 void onnxruntime::ValidateKeepDims(const onnxruntime::TensorShape&, int64_t) keepdims was false. Can't reduce on dim with value of 0 if 'keepdims' is false. Invalid output shape would be produced. input_shape:{0,4}

[E] No function: __iter__ registered for opset: 11
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/b920405/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/home/b920405/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home/b920405/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/home/b920405/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/b920405/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/b920405/.vscode/extensions/ms-python.python-2022.20.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/home/b920405/git/onnx2tf/onnx2tf/onnx2tf.py", line 1521, in <module>
    main()
  File "/home/b920405/git/onnx2tf/onnx2tf/onnx2tf.py", line 1475, in main
    model = convert(
  File "/home/b920405/git/onnx2tf/onnx2tf/onnx2tf.py", line 974, in convert
    dummy_onnx_outputs: List[np.ndarray] = dummy_onnx_inference(
  File "/home/b920405/git/onnx2tf/onnx2tf/utils/common_functions.py", line 2832, in dummy_onnx_inference
    outputs = onnx_session.run(None, dummy_datas)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ReduceMax node. Name:'ReduceMax_686' Status Message: /home/b920405/work/onnxruntime/onnxruntime/core/providers/cpu/reduction/reduction_ops.cc:763 void onnxruntime::ValidateKeepDims(const onnxruntime::TensorShape&, int64_t) keepdims was false. Can't reduce on dim with value of 0 if 'keepdims' is false. Invalid output shape would be produced. input_shape:{0,4}
  1. If keepdims=False, modify the model so that ReduceXX is performed with keepdims=True, keeping the dimension, and then slice the corresponding dimension away afterwards (see the sketch after this list). https://github.com/PINTO0309/PINTO_model_zoo/blob/efe3e8da69e1ef14014430b1c88c790976ec10ef/342_ALIKE/PINTO_special/make_mnn_matcher.py#L23-L24
  2. Feed an image np.ndarray as dummy input data when --check_onnx_tf_outputs_elementwise_close is executed. sample: https://github.com/PINTO0309/onnx2tf/releases/download/1.0.49/calibration_image_sample_data_20x128x128x3_float32.npy
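
A minimal PyTorch-side sketch of workaround 1, assuming the ReduceMax originates from a torch.max call at export time (tensor names here are hypothetical): exporting with keepdim=True followed by a slice produces ReduceMax(keepdims=1) plus a slice in the ONNX graph, which onnxruntime accepts even when the leading dimension is 0:

import torch

boxes = torch.zeros(0, 4)  # zero detections left after post-processing

# boxes.max(dim=1).values exports ReduceMax(keepdims=0), which onnxruntime
# rejects at runtime for input shape {0, 4}. Keeping the dim and slicing it
# off instead exports ReduceMax(keepdims=1) followed by a slice, which stays
# a valid shape throughout:
scores = boxes.max(dim=1, keepdim=True).values[:, 0]
print(scores.shape)  # torch.Size([0])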
