Comments (3)
Unfortunately, there is always an error of about 1e-4 due to differences in run-time calculation specifications.
If you can't tolerate that minor error, don't use TFLite.
from onnx2tf.
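The ~1e-4 error described above is the kind of difference a tolerance-based comparison absorbs. A minimal sketch with illustrative arrays (the values and tolerances here are examples, not taken from the issue):

```python
import numpy as np

# Illustrative arrays (not from the issue): b differs from a by ~1e-4,
# the magnitude of runtime error described above.
a = np.array([0.123456, 0.654321], dtype=np.float32)
b = a + np.float32(1e-4)

exact = np.array_equal(a, b)                     # bitwise equality fails
close = np.allclose(a, b, rtol=1e-3, atol=2e-4)  # tolerance-based check passes
```

This is why element-wise checks between runtimes use np.allclose with explicit rtol/atol rather than exact equality.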
Thanks :)
Sorry for bothering you, but there is a discrepancy between the outputs of the TensorFlow model and the ONNX model.
Note that the outputs of TensorFlow Lite and TensorFlow models match.
Here's the new PoC file:
poc.zip
from os.path import join

import numpy as np
import onnxruntime
import tensorflow
from einops import rearrange

import onnx2tf

if __name__ == "__main__":
    onnx_model_path = 'poc.onnx'
    tf_output_path = './tf_path'

    # Convert the ONNX model to TensorFlow
    onnx2tf.convert(
        input_onnx_file_path=onnx_model_path,
        output_folder_path=tf_output_path,
        copy_onnx_input_output_names_to_tflite=True,
        non_verbose=True,
    )

    # Random NCHW input
    input_np = np.random.randn(1, 3, 224, 224).astype('f')

    # Load and run the ONNX model
    ort_session = onnxruntime.InferenceSession(onnx_model_path)
    ort_output = ort_session.run(None, {'x': input_np})

    # Prepare NHWC input for the TensorFlow models
    input_for_tf = rearrange(input_np, 'b c h w -> b h w c')

    # Load and run the TFLite model
    interpreter = tensorflow.lite.Interpreter(join(tf_output_path, 'poc_float32.tflite'))
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]['index'], input_for_tf)
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]['index'])

    # Load and run the TensorFlow SavedModel
    tf_model = tensorflow.saved_model.load(tf_output_path)
    tf_output = tf_model(input_for_tf)

    # Compare ONNX and TensorFlow outputs
    if np.allclose(ort_output, tf_output, rtol=1e-03, atol=1e-04):
        print("Test Passed: ONNX and TensorFlow outputs match\n")
    else:
        print("Test Failed: ONNX and TensorFlow outputs differ\n")

    # Compare TFLite and TensorFlow outputs
    if np.allclose(output_data, tf_output, rtol=1e-03, atol=1e-04):
        print("Test Passed: TFLite and TensorFlow outputs match\n")
    else:
        print("Test Failed: TFLite and TensorFlow outputs differ\n")
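One thing worth noting about the script above: onnx2tf converts NCHW ONNX models to NHWC TensorFlow models (which is why the input is rearranged with einops), so the raw outputs of the two runtimes may differ in layout, not in values. A sketch with synthetic stand-in arrays (assuming 4-D outputs; not the real model's outputs) of aligning layouts before comparing:

```python
import numpy as np

# Synthetic stand-ins for the model outputs (assumed 4-D):
# the ONNX output is NCHW, the converted TensorFlow output is NHWC.
onnx_out = np.random.randn(1, 3, 8, 8).astype(np.float32)
tf_out = np.transpose(onnx_out, (0, 2, 3, 1))  # same values, NHWC layout

# The raw arrays do not even have the same shape...
shapes_match = onnx_out.shape == tf_out.shape

# ...but after transposing NHWC back to NCHW they compare element-wise.
aligned = np.transpose(tf_out, (0, 3, 1, 2))
values_match = np.allclose(onnx_out, aligned, rtol=1e-3, atol=1e-4)
```

If the real model's output rank or layout differs, the transpose axes would need to be adjusted accordingly.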
Sorry, but I don't understand the point you are trying to make. The final outputs of ONNX and TFLite match.
pip show onnx2tf
Name: onnx2tf
Version: 1.19.15
onnx2tf -i poc.onnx -cotof
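For reference, -cotof runs onnx2tf's own full element-wise accuracy check between the source ONNX model and the converted models; the README also documents -cotoa for setting the absolute tolerance of that check (treat the exact spelling as version-dependent):

```shell
# -cotof: compare every ONNX output element-wise against the converted model
# -cotoa: absolute tolerance for that comparison (1e-4 here)
onnx2tf -i poc.onnx -cotof -cotoa 1e-4
```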
Related Issues (20)
- Unable to replace flatten HOT 4
- How to convert onnx(NCHW) to NHWC and write back to onnx file Because my HW takes onnx file as input in NHWC format HOT 3
- How to convert onnx(NCHW) to NHWC(without adding transpose) and write back to onnx file HOT 1
- How to convert onnx(NCHW) to NHWC(without adding transpose) and write back to onnx file HOT 3
- Please do reply for the last time either this issue or @ How to convert onnx(NCHW) to NHWC(without adding transpose) and write back to onnx file #586 HOT 1
- ONNX Split operation converts to a tf StridedSlice operation with outputs in the wrong order. HOT 2
- Depth_Anything to ONNX model's conversion failed HOT 1
- Bad accuracy after conversion HOT 1
- onnx2tf conversion error for gpt2 transformers model HOT 2
- onnx2tf conversion error for gpt2 transformers model #591 follow up HOT 2
- Error when trying to run converted model; gpt2 tensorflow from onnx #591 follow up HOT 1
- Modifying Multi-output Layer index when Exporting onnx to tflite HOT 1
- Model modification issue HOT 2
- Onnx to tflite conversion issue on TILE layer.
- While converting onnx to tflite TILE layer is getting ignored. HOT 2
- [BUG] Celu: discrepancies between the outputs of the ONNX model and converted TensorFlow models. HOT 1
- Resize operation fails (['unk__0', 'unk__1', 'unk__2', 'unk__3']) and raises UnboundLocalError: local variable 'new_size' referenced before assignment HOT 20
- FULL INTEGER QUANTIZED MODEL (INT8) infers always the same value HOT 2
- Error performing quantization-aware training (QAT) with keras onnx2tf generated model HOT 4