Comments (2)
onnx2tf -i decoder_model.onnx -b 1 -osd
:
:
INFO: 1613 / 1613
INFO: onnx_op_type: MatMul onnx_op_name: /lm_head/MatMul
INFO: input_name.1: /transformer/Reshape_3_output_0 shape: None dtype: float32
INFO: input_name.2: onnx::MatMul_3718 shape: [768, 50257] dtype: float32
INFO: output_name.1: logits shape: ['batch_size', 'sequence_length', 50257] dtype: float32
INFO: tf_op_type: matmul
INFO: input.1.a: name: tf.reshape_367/Reshape:0 shape: (None, None, None) dtype: <dtype: 'float32'>
INFO: input.2.b: shape: (768, 50257) dtype: float32
INFO: input.3.output_type: name: float32 shape: ()
INFO: output.1.output: name: tf.linalg.matmul_72/MatMul:0 shape: (None, None, 50257) dtype: <dtype: 'float32'>
saved_model output started ==========================================================
saved_model output complete!
loc(fused["SelectV2:", callsite("model_39/tf.where/SelectV2"("/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py":284:1) at callsite("/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py":308:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py":1059:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py":597:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py":41:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/onnx2tf/onnx2tf.py":1162:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/autograph/core/function_wrappers.py":113:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/onnx2tf/onnx2tf.py":1162:1 at callsite("/home/b920405/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":65:1 at "/home/b920405/.local/lib/python3.10/site-packages/keras/src/engine/training.py":589:1)))))))))]): error: 'tf.SelectV2' op operands don't have broadcast-compatible shapes
Traceback (most recent call last):
File "/home/b920405/.local/bin/onnx2tf", line 8, in <module>
sys.exit(main())
File "/home/b920405/.local/lib/python3.10/site-packages/onnx2tf/onnx2tf.py", line 2326, in main
model = convert(
File "/home/b920405/.local/lib/python3.10/site-packages/onnx2tf/onnx2tf.py", line 1250, in convert
tflite_model = converter.convert()
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 2171, in convert
return super(TFLiteConverterV2, self).convert()
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1125, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1079, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1778, in convert
return super(TFLiteFrozenGraphConverterV2, self).convert(
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1357, in convert
result = _convert_graphdef(
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py", line 212, in wrapper
raise converter_error from None # Re-throws the exception.
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/convert.py", line 978, in convert_graphdef
data = convert(
File "/home/b920405/.local/lib/python3.10/site-packages/tensorflow/lite/python/convert.py", line 366, in convert
raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: /home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py:284:1: error: 'tf.SelectV2' op operands don't have broadcast-compatible shapes
concrete_function = _create_concrete_function(
^
/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py:308:1: note: called from
traced_func_graph = func_graph_module.func_graph_from_py_func(
^
/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py:1059:1: note: called from
func_outputs = python_func(*func_args, **func_kwargs)
^
/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py:597:1: note: called from
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
^
/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py:41:1: note: called from
return api.converted_call(
^
/home/b920405/.local/lib/python3.10/site-packages/onnx2tf/onnx2tf.py:1162:1: note: called from
run_model = tf.function(lambda *inputs : model(inputs))
^
/home/b920405/.local/lib/python3.10/site-packages/tensorflow/python/autograph/core/function_wrappers.py:113:1: note: called from
return thunk(scope)
^
/home/b920405/.local/lib/python3.10/site-packages/onnx2tf/onnx2tf.py:1162:1: note: called from
run_model = tf.function(lambda *inputs : model(inputs))
^
/home/b920405/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:65:1: note: called from
return fn(*args, **kwargs)
^
/home/b920405/.local/lib/python3.10/site-packages/keras/src/engine/training.py:589:1: note: called from
return super().__call__(*args, **kwargs)
^
saved_model_cli show --dir saved_model/ --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['attention_mask'] tensor_info:
dtype: DT_INT64
shape: (1, -1)
name: serving_default_attention_mask:0
inputs['input_ids'] tensor_info:
dtype: DT_INT64
shape: (1, -1)
name: serving_default_input_ids:0
The given SavedModel SignatureDef contains the following output(s):
outputs['logits'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 50257)
name: PartitionedCall:0
outputs['present.0.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:1
outputs['present.0.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:2
outputs['present.1.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:3
outputs['present.1.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:4
outputs['present.10.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:5
outputs['present.10.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:6
outputs['present.11.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:7
outputs['present.11.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:8
outputs['present.2.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:9
outputs['present.2.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:10
outputs['present.3.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:11
outputs['present.3.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:12
outputs['present.4.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:13
outputs['present.4.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:14
outputs['present.5.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:15
outputs['present.5.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:16
outputs['present.6.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:17
outputs['present.6.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:18
outputs['present.7.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:19
outputs['present.7.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:20
outputs['present.8.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:21
outputs['present.8.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:22
outputs['present.9.key'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:23
outputs['present.9.value'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, 64, 12)
name: PartitionedCall:24
Method name is: tensorflow/serving/predict
The MetaGraph with tag set ['serve'] contains the following ops: {'Fill', 'Range', 'RealDiv', 'Cast', 'Mul', 'TensorScatterUpdate', 'SplitV', 'Transpose', 'BatchMatMulV2', 'StaticRegexFullMatch', 'ConcatV2', 'Sub', 'ExpandDims', 'Pack', 'MergeV2Checkpoints', 'Squeeze', 'Identity', 'Less', 'SelectV2', 'Softmax', 'Const', 'RestoreV2', 'SaveV2', 'Mean', 'GatherV2', 'ShardedFilename', 'Shape', 'PartitionedCall', 'Placeholder', 'StatefulPartitionedCall', 'Sqrt', 'StringJoin', 'StridedSlice', 'Select', 'NoOp', 'AddV2', 'MatMul', 'Reshape', 'Tanh'}
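Note the present.*.key / present.*.value outputs above have shape (-1, -1, 64, 12) rather than the (batch, num_heads=12, seq_len, head_dim=64) layout GPT-2's ONNX export uses: onnx2tf transposes NCHW-style layouts to NHWC. A small NumPy sketch of the axis mapping (the concrete batch/seq sizes are made up for illustration):

```python
import numpy as np

# Hypothetical ONNX GPT-2 present tensor: (batch, num_heads, seq_len, head_dim)
onnx_present = np.zeros((1, 12, 7, 64), dtype=np.float32)

# onnx2tf's NCHW -> NHWC convention moves axis 1 (heads) to the end,
# yielding (batch, seq_len, head_dim, num_heads):
tf_present = np.transpose(onnx_present, (0, 2, 3, 1))
print(tf_present.shape)  # (1, 7, 64, 12), matching the (-1, -1, 64, 12) signature above
```

Any client feeding these outputs back in as past key/values would need to apply the inverse transpose (0, 3, 1, 2) to recover the ONNX layout.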
from onnx2tf.
Hi @PINTO0309, thank you for the prompt response!
When running !onnx2tf -i model.onnx -b 1 -osd
I get the same error as before:
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces `keras.layers` and `keras.operations`). You are likely doing something like:
ERROR: input_onnx_file_path: model.onnx
ERROR: onnx_op_name: wa/transformer/Shape
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
Can you share your environment settings? Maybe you're using a different onnx2tf or library version.
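One way to gather the relevant versions in one go (the package list is an assumption; adjust it to whatever else your conversion pipeline pulls in):

```python
from importlib.metadata import version, PackageNotFoundError

def collect_versions(packages=("onnx2tf", "onnx", "tensorflow", "keras")):
    """Return the installed version of each package, or 'not installed'."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = "not installed"
    return report

if __name__ == "__main__":
    for pkg, ver in collect_versions().items():
        print(f"{pkg}=={ver}")
```

Pasting that output into the issue makes it easy to spot a version mismatch, since onnx2tf is sensitive to the installed TensorFlow/Keras combination.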