Comments (9)
@tanpengshi The FlexDelegate functionality is integrated within the TensorFlow Lite interpreter itself when built with Select Ops enabled. Could you please upgrade to the latest TF version and let us know, as older TF versions are not actively supported. Thank you!
from tensorflow.
I have tried invoking the interpreter with the following code, and I get these errors:
guard let interpreter = tflite else {
    print("TFLite Error: Interpreter is nil.")
    return
}

do {
    let inputData = Data(buffer: UnsafeBufferPointer(start: dataBuffer, count: dataBuffer.count))
    try interpreter.copy(inputData, toInputAt: 0)
    try interpreter.invoke()
    let output = getTensorOutput(interpreter: interpreter)
    let detectedActionIndex = output.argmax()
    print("Detected Action Index is: \(detectedActionIndex)")
} catch {
    print("TFLite Error: \(error.localizedDescription)")
}
TensorFlow Lite Error: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TensorFlow Lite Error: Node number 76 (FlexTensorListReserve) failed to prepare.
TFLite Error: Must call allocateTensors().
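As context for the last error: in the TensorFlowLiteSwift API, `try interpreter.allocateTensors()` must be called once after creating the `Interpreter` and before the first `copy(_:toInputAt:)`. The `getTensorOutput`/`argmax` helpers in the snippet are not part of that API; a minimal Foundation-only sketch of the byte conversions they would need (names are illustrative, not from the library) could look like this:

```swift
import Foundation

// Hypothetical helpers matching the snippet above; names are illustrative.

// Pack a [Float32] feature buffer into Data for copy(_:toInputAt:).
// withUnsafeBufferPointer guarantees a valid contiguous pointer,
// unlike passing the array itself as `start:`.
func inputData(from buffer: [Float32]) -> Data {
    return buffer.withUnsafeBufferPointer { Data(buffer: $0) }
}

// Unpack an output tensor's raw bytes back into [Float32].
func floats(from data: Data) -> [Float32] {
    return data.withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
}

// Index of the largest score (the detected action).
func argmax(_ scores: [Float32]) -> Int {
    var best = 0
    for (i, v) in scores.enumerated() where v > scores[best] { best = i }
    return best
}

let scores: [Float32] = [0.1, 0.7, 0.2]
let data = inputData(from: scores)
print(data.count)                  // 12 bytes: 3 floats x 4 bytes each
print(argmax(floats(from: data)))  // 1
```

In the real app, the `allocateTensors()` call would go right after `Interpreter(modelPath:)`; that is what "Must call allocateTensors()" refers to.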
I have already added "import TensorFlowLiteSelectTfOps" in my Swift file.
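Importing the module alone is usually not enough on iOS: per the Select TF ops guide linked in the error message, the TensorFlowLiteSelectTfOps framework must be force-loaded so the linker does not strip the Flex delegate's op registrations. The documented fix is to add the following to the app target's Build Settings > Other Linker Flags (path assumes the standard CocoaPods layout; adjust if your Pods directory differs):

```text
-force_load $(SRCROOT)/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps
```

Without this flag, the "Select TensorFlow op(s) ... not supported by this interpreter" error appears even when the pod is installed and imported.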
I have also accidentally deleted the CoreMLDelegate and MetalDelegate sources from the library. Could that cause this error? I don't know how to recover the deleted files. When I do:
rm -rf Pods Podfile.lock
pod install
The files are still missing. I really appreciate your help! I have come very close to a solution for my app!
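One reason `pod install` can keep restoring a broken state is CocoaPods' local download cache. A sketch of a fuller reset using standard CocoaPods commands (this re-fetches every pod from scratch, so the first install afterwards is slow):

```shell
# Remove CocoaPods integration artifacts from the project
pod deintegrate

# Delete the installed pods and the lockfile
rm -rf Pods Podfile.lock

# Drop CocoaPods' local cache so frameworks are re-downloaded
pod cache clean --all

# Reinstall everything listed in the Podfile
pod install
```

After this, a clean build (Product > Clean Build Folder in Xcode) avoids stale derived data.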
@tanpengshi Directly recovering deleted libraries from within your project isn't possible. Could you try the latest TFLite version? Newer versions might include these delegates within the framework itself, eliminating the need for separate libraries. After reinstalling/updating, clean and rebuild your project to ensure the changes take effect.
Thank you!
Are you suggesting that I recreate the project from scratch?
Now I proceed to try a more conventional method using the Podfile:
# Uncomment the next line to define a global platform for your project
platform :ios, '17.0'

target 'FacialRecognition' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # pod 'TensorFlowLiteSwift', :path => '../../local-podspecs/TensorFlowLiteSwift.podspec'
  # pod 'TensorFlowLiteSelectTfOps', :path => '../../local-podspecs/TensorFlowLiteSelectTfOps.podspec'
  # pod 'TensorFlowLiteSwift' # or 'TensorFlowLiteObjC'
  # pod 'TensorFlowLiteSelectTfOps', '~> 0.0.1-nightly'
  pod 'TensorFlowLiteSwift'
  pod 'TensorFlowLiteSelectTfOps', '~> 0.0.1-nightly'

  # Pods for FacialRecognition
  target 'FacialRecognitionTests' do
    inherit! :search_paths
    # Pods for testing
  end

  target 'FacialRecognitionUITests' do
    # Pods for testing
  end

  # Add these lines to ensure consistent EXCLUDED_ARCHS settings
  post_install do |installer|
    installer.pods_project.targets.each do |target|
      target.build_configurations.each do |config|
        config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
      end
    end
  end
end
But when I build my project, I get:
When I check the 'Pods' directory, I don't see a 'resources-to-copy-FacialRecognition' file.
I am now able to solve the issue above, and the TensorFlow Lite model with Select Ops runs successfully by following the suggestion here:
However, my app size is over 200 MB because of the library! Hence, I need to do a selective build, which effectively returns me to the first solution. In my Podfile, for the first solution I did:
platform :ios, '17.0'

target 'FacialRecognition' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  pod 'TensorFlowLiteSwift', :path => '../../local-podspecs/TensorFlowLiteSwift.podspec'

  # Pods for FacialRecognition
  target 'FacialRecognitionTests' do
    inherit! :search_paths
    # Pods for testing
  end

  target 'FacialRecognitionUITests' do
    # Pods for testing
  end

  # Add these lines to ensure consistent EXCLUDED_ARCHS settings
  post_install do |installer|
    installer.pods_project.targets.each do |target|
      target.build_configurations.each do |config|
        config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
      end
    end
  end
end
In my TensorFlowLiteSwift.podspec file, I have:
Pod::Spec.new do |s|
  s.name = 'TensorFlowLiteSwift'
  s.version = '2.7.0'
  s.authors = 'Google Inc.'
  s.license = { :type => 'Apache' }
  s.homepage = 'https://github.com/tensorflow/tensorflow'
  s.source = { :git => 'https://github.com/tensorflow/tensorflow.git', :tag => "v#{s.version}" }
  s.summary = 'TensorFlow Lite for Swift'
  s.description = <<-DESC
  TensorFlow Lite is TensorFlow's lightweight solution for Swift developers. It
  enables low-latency inference of on-device machine learning models with a
  small binary size and fast performance supporting hardware acceleration.
  DESC

  s.ios.deployment_target = '17.0'
  s.module_name = 'TensorFlowLite'
  s.static_framework = true

  tfl_dir = 'tensorflow/lite/'
  swift_dir = tfl_dir + 'swift/'

  s.default_subspec = 'Core'
  s.subspec 'Core' do |core|
    # Adjust the path to point to your custom frameworks
    core.vendored_frameworks = [
      'frameworks/TensorFlowLiteC.framework',
      'frameworks/TensorFlowLiteSelectTfOps.framework'
    ]
    core.source_files = swift_dir + 'Sources/*.swift'
    core.exclude_files = swift_dir + 'Sources/{CoreML,Metal}Delegate.swift'
    core.test_spec 'Tests' do |ts|
      ts.source_files = swift_dir + 'Tests/*.swift'
      ts.exclude_files = swift_dir + 'Tests/MetalDelegateTests.swift'
      ts.resources = [
        tfl_dir + 'testdata/add.bin',
        tfl_dir + 'testdata/add_quantized.bin',
      ]
    end
  end

  s.subspec 'CoreML' do |coreml|
    coreml.source_files = swift_dir + 'Sources/CoreMLDelegate.swift'
    coreml.dependency 'TensorFlowLiteSwift/Core', "#{s.version}"
  end

  s.subspec 'Metal' do |metal|
    metal.source_files = swift_dir + 'Sources/MetalDelegate.swift'
    metal.dependency 'TensorFlowLiteSwift/Core', "#{s.version}"
    metal.test_spec 'Tests' do |ts|
      ts.source_files = swift_dir + 'Tests/{Interpreter,MetalDelegate}Tests.swift'
      ts.resources = [
        tfl_dir + 'testdata/add.bin',
        tfl_dir + 'testdata/add_quantized.bin',
        tfl_dir + 'testdata/multi_add.bin',
      ]
    end
  end
end
But using this method, with the MetalDelegate and CoreMLDelegate libraries deleted, I get a "FlexDelegate not found" error.
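For the 200 MB size problem, TensorFlow also documents an iOS selective build that compiles only the Select TF ops a given model actually uses, producing the vendored frameworks a local podspec like the one above expects. A sketch, run from a TensorFlow source checkout with Bazel configured for iOS (flags per the "Reduce binary size" guide; model paths are illustrative):

```shell
# From the tensorflow repo root, after running ./configure for iOS:
bash tensorflow/lite/ios/build_frameworks.sh \
  --input_models=model1.tflite \
  --target_archs=arm64

# The script emits TensorFlowLiteC (and, if the model needs Flex ops,
# TensorFlowLiteSelectTfOps) frameworks under bazel-bin/, containing
# only the ops used by the listed models.
```

The resulting frameworks can then be dropped into the `frameworks/` directory referenced by `core.vendored_frameworks` in the podspec.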
@tanpengshi Thank you for the update. Glad the issue has been resolved.
Could you please let us know if we can close the issue?
Thank you!
Sure, we can close this issue, and I will open a new one for the remaining problem.
Are you satisfied with the resolution of your issue?
@tanpengshi Thank you!