Comments (12)
The PR is already closed but I want to share my experience in case someone has the same problem.
I faced the same issue, and after a long time debugging I noticed that my model expects an input of type TfLiteType.float32.
If you load the image using the static method fromImage, it will create a TensorImage of type TfLiteType.uint8:
// _inputImage type TfLiteType.uint8
_inputImage = TensorImage.fromImage(image);
You should instead create the TensorImage object yourself, specifying the type, and then load your image with loadImage:
// _inputImage type TfLiteType.float32
_inputImage = TensorImage(TfLiteType.float32);
_inputImage.loadImage(image);
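The underlying reason is buffer size: the interpreter checks that the input buffer's byte length matches what the tensor expects, and float32 needs 4 bytes per element while uint8 needs 1. A quick sketch of the arithmetic (the [1, 224, 224, 3] shape here is just an illustrative assumption, not from the thread):

```python
import math

shape = [1, 224, 224, 3]           # hypothetical model input shape
num_elements = math.prod(shape)    # 150528 elements

uint8_bytes = num_elements * 1     # 1 byte per uint8 element
float32_bytes = num_elements * 4   # 4 bytes per float32 element

print(uint8_bytes)    # 150528
print(float32_bytes)  # 602112
# A uint8 TensorImage buffer fed to a float32 model fails the
# interpreter's size precondition (150528 != 602112).
```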
from tflite_flutter_plugin.
The problem in my case was that I did not correctly declare the outputs of my models; each model has different tensor shapes. A sample of my code is below:
TensorImage tensorImage = TensorImage.fromFile(file);
tensorImage = _imageProcessor.process(tensorImage);
TensorBuffer output0 = TensorBuffer.createFixedSize(
_interpreter.getOutputTensor(0).shape,
_interpreter.getOutputTensor(0).type);
TensorBuffer output1 = TensorBuffer.createFixedSize(
_interpreter.getOutputTensor(1).shape,
_interpreter.getOutputTensor(1).type);
Map<int, ByteBuffer> outputs = {0: output0.buffer, 1: output1.buffer};
_interpreter.runForMultipleInputs([tensorImage.buffer], outputs);
List<double> regression = output0.getDoubleList();
List<double> classificators = output1.getDoubleList();
The list of integers with length 57344 (Uint8List) is the byte representation of the float data.
There are multiple ways to get the data in float form. Ideally, you should be using runForMultipleInputs, like this:
TensorBuffer output0 = TensorBuffer.createFixedSize(interpreter.getOutputTensor(0).shape, interpreter.getOutputTensor(0).type);
TensorBuffer output1 = TensorBuffer.createFixedSize(interpreter.getOutputTensor(1).shape, interpreter.getOutputTensor(1).type);
Map<int, ByteBuffer> outputs = {0: output0.buffer, 1: output1.buffer};
interpreter.runForMultipleInputs([tensorImage.buffer], outputs);
List<double> regression = output0.getDoubleList();
You can also create a TensorBufferFloat, load it with loadBuffer(regressors.buffer, shape: interpreter.getOutputTensor(0).shape), and then call getDoubleList().
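For context, converting the raw bytes back to floats is just a reinterpretation: 57344 bytes / 4 bytes per float32 = 14336 values. In Python terms (a sketch of the idea, not the plugin's API), this is effectively what getDoubleList() does:

```python
import struct

# Pretend this is the raw output buffer: 14336 float32 values as bytes.
values = [0.5] * 14336
raw_bytes = struct.pack("<14336f", *values)  # little-endian float32
assert len(raw_bytes) == 57344               # matches the byte length in the thread

# Reinterpret the bytes as floats, 4 bytes at a time.
floats = list(struct.unpack("<14336f", raw_bytes))
print(len(floats))  # 14336
```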
Hope this helps.
Hi, @KiruthikaAdhi.
TensorBuffer probabilityBuffer = TensorBuffer.createFixedSize([1, 1001], TfLiteType.uint8);
The type of this buffer should be TfLiteType.float32, not TfLiteType.uint8.
More info:
This issue occurs because:
- You might not be using input/output buffers of the shape and type required by the interpreter. Verify this using interpreter.getInputTensors() and interpreter.getOutputTensors().
- The values of the input buffer may not be the expected ones; perhaps you are forgetting to normalize your input. Try using NormalizeOp.
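For reference, NormalizeOp(mean, std) maps each pixel value to (x - mean) / std. With mean = std = 127.5 (a common choice, but check the preprocessing your model was trained with), it rescales [0, 255] to [-1, 1]:

```python
def normalize(pixel, mean=127.5, std=127.5):
    # Same arithmetic that a NormalizeOp(mean, std) applies per pixel.
    return (pixel - mean) / std

print(normalize(0))      # -1.0
print(normalize(127.5))  # 0.0
print(normalize(255))    # 1.0
```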
@xellDart @am15h Solved the issue by modifiying the shape of input and output tensors. Thanks for the help!
@xellDart This could be happening because of bad input to the interpreter. Are you sure that normalization is not required for the input image? Try adding NormalizeOp to the image-processing pipeline with appropriate values. Have you checked whether the current values of tensorImage.buffer.asUint8List() are the same as the expected ones?
Thanks, that worked. I only have one more question...
I have this code in python:
regressors = interpreter.get_tensor(output_details[0]["index"])
classificators = interpreter.get_tensor(output_details[1]["index"])
And with your lib I can get this with
List<Tensor> tensors = _interpreter.getOutputTensors();
Tensor regressors = tensors[0];
Tensor classificators = tensors[1];
My question is: how can I get the elements of the tensor like this?
Python regressors tensor elements:
regressors:
[ 3.4080482e-01 2.5098801e-02 2.8392452e+01 ... -5.0155647e+01
4.7399345e+01 1.4660334e+01]
In Dart I tried getData, but it returns a list of integers with length 57344; however, numElements returns the correct value, 14336.
Thanks
How did you solve this issue? I am facing the same error; my code is as follows:
File imageFile = File(filePath);
ImageProcessor imageProcessor = ImageProcessorBuilder()
.add(ResizeOp(224, 224, ResizeMethod.NEAREST_NEIGHBOUR))
.build();
TensorImage tensorImage = TensorImage.fromFile(imageFile);
tensorImage = imageProcessor.process(tensorImage);
TensorImage outputImage = new TensorImage();
TensorBuffer probabilityBuffer = TensorBuffer.createFixedSize(<int>[1, 1001], TfLiteType.uint8);
Interpreter interpreter = await Interpreter.fromAsset("models/example.tflite");
interpreter.run(tensorImage.buffer, probabilityBuffer.buffer);
The error is as follows:
Unhandled Exception: Bad state: failed precondition
E/flutter ( 5476): #0 checkState (package:quiver/check.dart:71:5)
E/flutter ( 5476): #1 Tensor.setTo (package:tflite_flutter/src/tensor.dart:150:5)
E/flutter ( 5476): #2 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:194:33)
E/flutter ( 5476): #3 Interpreter.run (package:tflite_flutter/src/interpreter.dart:165:5)
I have an edge-detection model which takes an input image and gives an output image. How should I solve this?
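The "Bad state: failed precondition" comes from Tensor.setTo checking that the supplied buffer's byte length matches the tensor's. A language-agnostic sketch of that check (the shapes and types below are hypothetical, for illustration only):

```python
import math

TYPE_SIZES = {"uint8": 1, "float32": 4}  # bytes per element

def expected_bytes(shape, dtype):
    # Byte length the interpreter expects for a tensor of this shape/type.
    return math.prod(shape) * TYPE_SIZES[dtype]

# Hypothetical edge-detection model: float32 image in, float32 map out.
print(expected_bytes([1, 224, 224, 3], "float32"))  # 602112
print(expected_bytes([1, 224, 224, 1], "float32"))  # 200704

# A uint8 [1, 1001] probabilityBuffer is only 1001 bytes; if the model's
# output tensor differs in shape or type, the size check fails.
print(expected_bytes([1, 1001], "uint8"))  # 1001
```

Comparing these numbers against interpreter.getOutputTensor(i).shape and .type before allocating buffers is usually enough to spot the mismatch.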
Just look at the example repo here: https://github.com/am15h/tflite_flutter_helper/tree/master/example/image_classification
@xellDart @am15h Solved the issue by modifiying the shape of input and output tensors. Thanks for the help!
Same for me; I fixed it by adding:
...
_inputImage = TensorImage(_inputType);
_inputImage.loadImage(image);
Hi all,
Sorry to still respond on this closed ticket, but I'm running into this while using TfLiteType.float32.
The code that I'm trying is at:
print(_interpreter.getInputTensors());
print(_interpreter.getOutputTensors());
Both calls print Tensor{_tensor: Pointer<TfLiteTensor>: address=0x72a8fa67c0, name: serving_default_input_1:0, type: TfLiteType.float32
Does anybody have an idea?
cheers
this is my _interpreter.getInputTensors()
[Tensor{_tensor: Pointer: address=0xb4000072a21f3030, name: serving_default_images:0, type: uint8, shape: [1, 320, 320, 3], data: 307200}]
I am using both
_inputImage = TensorImage.fromImage(image);
_inputImage = TensorImage(TfLiteType.float32);
_inputImage.loadImage(image);
and I still get the error 😭. Any help would be appreciated.