Comments (8)
Hello,
TF accepts both output shapes as valid: [1, 66, 226, 16] and [1, 65, 226, 16]
import tensorflow as tf

input = tf.constant(1., shape=[1, 32, 224, 16])  # NHWC input
kernel = tf.constant(2., shape=[3, 3, 16, 16])   # HWCoutCin
x = tf.nn.conv2d_transpose(input,
                           kernel,
                           output_shape=[1, 66, 226, 16],  # NHWC output
                           strides=(2, 1),
                           padding="VALID",
                           data_format='NHWC',
                           dilations=None,
                           name=None)
x.shape

input = tf.constant(1., shape=[1, 32, 224, 16])  # NHWC input
kernel = tf.constant(2., shape=[3, 3, 16, 16])   # HWCoutCin
x = tf.nn.conv2d_transpose(input,
                           kernel,
                           output_shape=[1, 65, 226, 16],  # NHWC output
                           strides=(2, 1),
                           padding="VALID",
                           data_format='NHWC',
                           dilations=None,
                           name=None)
x.shape
In ArmNN we calculate the output this way:
Out = ( In - 1 ) * stride + filter - padding.
65 = ( 32 - 1 ) * 2 + 3 - 0
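The ambiguity TF allows can be seen by inverting the forward VALID-convolution formula: any output size O with floor((O - kernel) / stride) + 1 == In maps back to the same input size. A minimal sketch (the helper name valid_output_sizes is ours, not a TF API):

```python
def valid_output_sizes(in_size, kernel, stride):
    """All output sizes O for which a forward VALID convolution with
    this kernel/stride gives back exactly in_size, i.e.
    in_size == floor((O - kernel) / stride) + 1."""
    lo = (in_size - 1) * stride + kernel  # the Arm NN formula (padding = 0)
    return [o for o in range(lo, lo + stride)
            if (o - kernel) // stride + 1 == in_size]

print(valid_output_sizes(32, 3, 2))   # height -> [65, 66]
print(valid_output_sizes(224, 3, 1))  # width  -> [226]
```

With stride 2 there are two admissible heights, which is why TF accepts both 65 and 66 while the single Arm NN formula only produces 65.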
I am trying to see how we can support both output shapes, as TF does.
from armnn.
Hi @riestmo-nxp,
if you could attach the model, that would help us debug the issue.
Hi @TeresaARM,
sure, here is the model: model.zip
It seems the inferShape error happens when converting this model to the Arm NN IR, and the TfLiteParser may not account for the VALID padding mode.
Is the data layout of this model NHWC or NCHW?
About the TransposeConv padding modes: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose
Hi @xiaotongnii,
the data format is NHWC (as usual in TFLite).
Hi @riestmo-nxp, converting model.tflite to model.armnn works fine for me:
./ArmnnConverter -f tflite-binary -m model.tflite -i serving_default_input:0 -s 1,512,256,32 -o PartitionedCall:0 -p model.armnn
Please check your Arm NN version!
TFLite padding is more complex than what we have in Arm NN. Take a look at:
https://github.com/tensorflow/tensorflow/blob/be8160ab0a925311041f7a53b7c7c148a3e51f35/tensorflow/lite/delegates/xnnpack/xnnpack_delegate.cc#L1206
In the VALID case (one of my tests was hitting the error with your model), the calculations are:
*padding_top = *padding_bottom = *padding_left = *padding_right = 0;
*adjustment_height = (output_height - kernel_height) % stride_height;
*adjustment_width = (output_width - kernel_width) % stride_width;
The CalcPadding() code inserted above reflects the first line. The difference occurs in the adjustment calculations. From the TFLite code:
/// @param adjustment_height - additional elements in the bottom of the 2D output data.
/// @param adjustment_width - additional elements to the right of the 2D output data.
In the case that is failing with the provided model, adjustment_height = (64 - 3) % 2 = 1 and adjustment_width = 0.
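For reference, the VALID branch of the XNNPACK delegate quoted above can be sketched in Python (the function name is ours, used only for illustration):

```python
def tflite_valid_transpose_padding(output_size, kernel_size, stride):
    # Mirrors the XNNPACK delegate's VALID case: padding is zero on all
    # sides, and the leftover rows/columns are expressed as an output
    # "adjustment" (extra elements at the bottom/right) rather than padding.
    padding = 0
    adjustment = (output_size - kernel_size) % stride
    return padding, adjustment

# The failing case from the provided model: size 64, kernel 3, stride 2
print(tflite_valid_transpose_padding(64, 3, 2))   # -> (0, 1)
print(tflite_valid_transpose_padding(226, 3, 1))  # -> (0, 0)
```

It is this non-zero adjustment, not the padding itself, that the single Arm NN output formula cannot represent.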
We use TransposeConv2d from the Compute Library, and unfortunately I could not find support for the adjustment parameters there, which limits the types of padding we can support in Arm NN.
It's a complex problem so please let me know if you think I might have missed something. Thanks.
from armnn.