asus4 / tf-lite-unity-sample

TensorFlow Lite Samples on Unity

C# 67.36% ShaderLab 0.86% C 29.72% Python 1.42% C++ 0.06% Objective-C 0.57%
unity tensorflow-lite machine-learning mediapipe

tf-lite-unity-sample's Introduction

TensorFlow Lite for Unity Samples

A port of the "TensorFlow Lite Examples", plus some utilities, for Unity.

Tested on

  • iOS / Android / macOS / Windows / Linux
  • Unity 2022.3.22f1
  • TensorFlow 2.16.1

Included examples:

  • TensorFlow
    • MNIST
    • SSD Object Detection
    • DeepLab
    • PoseNet
    • Style Transfer
    • Text Classification
    • Bert Question and Answer
    • Super Resolution
    • Audio Classification
  • MediaPipe
    • Hand Tracking
    • Blaze Face
    • Face Mesh
    • Blaze Pose (Full body)
    • Selfie Segmentation

Included prebuilt libraries:

|                | iOS | Android | macOS | Ubuntu | Windows |
| -------------- | --- | ------- | ----- | ------ | ------- |
| Core CPU       | ✅   | ✅       | ✅     | ✅      | ✅       |
| Metal Delegate | ✅   | -       | ✅     | -      | -       |
| GPU Delegate   | -   | ✅       | -     | ✅ Experimental | - |
| NNAPI Delegate | -   | ✅       | -     | -      | -       |
  • You need to install OpenGL ES and OpenCL to run GPU Delegate on Linux. See MediaPipe for details.

Install TensorFlow Lite for Unity

Important

You need to install Git-LFS.

  • If you want to try all examples, clone this repository with Git-LFS.
  • If you just need the TensorFlow Lite libraries via UPM, open Packages/manifest.json and add the following lines to the scopedRegistries and dependencies sections.
{
  "scopedRegistries": [
    {
      "name": "package.openupm.com",
      "url": "https://package.openupm.com",
      "scopes": [
        "com.cysharp.unitask"
      ]
    },
    {
      "name": "npm",
      "url": "https://registry.npmjs.com",
      "scopes": [
        "com.github.asus4"
      ]
    }
  ],
  "dependencies": {
    // Core TensorFlow Lite libraries
    "com.github.asus4.tflite": "2.16.1",
    // Utilities for TFLite
    "com.github.asus4.tflite.common": "2.16.1",
    // Utilities for MediaPipe
    "com.github.asus4.mediapipe": "2.16.1",
    ...// other dependencies
  }
}

Build TensorFlow Lite libraries yourself

Pre-built libraries are included in the UPM package. You can also find TFLite libraries at tflite-runtime-builder for TFLite v2.14.0 and later.

If you want to build the latest TFLite yourself, follow these steps:

  1. Clone the TensorFlow repository
  2. Run ./configure in the repository
  3. Run ./build_tflite.py (Python 3) to build for each platform
# Update iOS, Android and macOS
./build_tflite.py --tfpath ../tensorflow -ios -android -macos

# Build with XNNPACK
./build_tflite.py --tfpath ../tensorflow -macos -xnnpack

Showcases

MNIST
Mnist

SSD Object Detection
SSD

DeepLab Semantic Segmentation
DeepLab

Style Transfer
styletransfer

Hand Tracking
handtracking

BERT
BERT

License

The samples folder (Assets/Samples/*) is licensed under MIT

MIT License

Copyright (c) 2024 Koki Ibukuro

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Other Licenses

Model Licenses

📌 : Each TensorFlow Lite model might have a different license. Please check the license of the model you use.

tf-lite-unity-sample's People

Contributors

asus4, bawahakim, buresu, dthul, nixon-voxell, pabu, simonoliver, solsnare, sonnybsj, stakemura, ukyoda, vikage


tf-lite-unity-sample's Issues

Vulkan API fails due to ARCore

I mirrored your technique for setting up TFLite pose tracking in an Android application. After failing to build a correctly working APK of my own project, I started comparing logcat output between your sample project and mine.

I noticed the same error pattern as described in #15. The Vulkan API then fires up and manages to render the camera input.

05-06 16:28:57.962  6574  6611 D Unity   : Unable to lookup library path for 'libtensorflowlite_gpu_delegate', native render plugin support disabled.
05-06 16:28:57.963  6574  6611 E Unity   : Unable to find libtensorflowlite_gpu_delegate
05-06 16:28:57.969  6574  6611 D Unity   : Unable to lookup library path for 'libtensorflowlite_c', native render plugin support disabled.
05-06 16:28:57.969  6574  6611 E Unity   : Unable to find libtensorflowlite_c
05-06 16:28:57.970  6574  6611 D Unity   : PlayerInitEngineNoGraphics OK
05-06 16:28:57.971  6574  6611 D Unity   : AndroidGraphics::Startup window =  0x78519fd010
05-06 16:28:57.971  6574  6611 D Unity   : [EGL] Attaching window :0x78519fd010
05-06 16:28:58.034  6574  6611 D Unity   : [Vulkan init] extensions: count=9

My project integrates ARCore, which prevents me from using the Vulkan API; instead, OpenGL ES 3.2 is used, which seems unable to properly render the camera input to the scene.

Anything I can do here to fix this up?

Need FaceMesh / Blaze Face Example

I'm very happy to see this plugin. We can run TensorFlow in Unity, and on the GPU! :)

However, after running these examples, I found some difficulties.

I tried to implement FaceMesh and BlazeFace from MediaPipe. It worked, but I found:

This plugin is very limited compared with TensorFlow Lite proper: we cannot perform operations (such as slice, div, ...) on tensor objects. In fact, there is no tensor object in this plugin, only arrays of floats, so I had to write these operations on the CPU, with a compute shader, or with a shader plus Graphics.Blit.

Anyway, I hope we can put more effort into developing this plugin. :)

MediaPipe - Iris Tracking Implementation

Hey guys,

first of all, you rock! I have been waiting a long time for a Unity package that provides both TF and MediaPipe features. I have already tried several examples on the iPad, and apart from a few of them everything compiles like a charm, so excellent job, and thank you again for providing this. Is there a way to get the iris tracking feature implemented as well? If that's not possible with the current package and I'd have to do it myself, are you planning to add it in the near future? I'm working on a realtime gaze tracker where this could lead to much better results (at least I hope) than old-school OpenCV approaches. But again: awesome job!

tensorflow_lite_gpu_framework missing architecture arm64?

Hi, I tried to compile your example for iOS and get the following error:

.../tf-lite-unity-sample-master/ios_build/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework, file was built for archive which is not the architecture being linked (arm64)

I checked the architecture of the tensorflow_lite_gpu_framework with the command lipo -info tensorflow_lite_gpu_framework and get the following output:

Non-fat file: tensorflow_lite_gpu_framework is architecture: x86_64

in contrast, TensorFlowLiteC.framework looks fine. Maybe I am missing something?

Thanks a lot,

Leif

[posenet] poseNet.Invoke(webcamTexture) into another thread

Sometimes poseNet.Invoke(webcamTexture) can be heavy on processing power, but it seems it cannot be executed on an external thread.
Unity isn't thread safe, so poseNet.Invoke(webcamTexture) can only be executed on the main thread.
Is there a way to do it with the Job System or managed multithreading?
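A hedged sketch of one possible workaround (not the sample's actual API): Unity's texture APIs must stay on the main thread, but the CPU-side interpreter invocation itself can usually run on a worker thread via Task.Run, as long as the input pixels are copied out on the main thread first. The Invoke overload taking raw pixels below is hypothetical; the shipped PoseNet sample only accepts a Texture.

```csharp
using System.Threading.Tasks;
using TensorFlowLite;
using UnityEngine;

public class AsyncPoseRunner : MonoBehaviour
{
    PoseNet poseNet;              // from the PoseNet sample (assumed)
    WebCamTexture webcamTexture;
    Task invokeTask;

    void Update()
    {
        // Only start a new inference once the previous one has finished.
        if (invokeTask != null && !invokeTask.IsCompleted) return;

        // Main thread: copy the camera frame into a CPU-side buffer,
        // since texture APIs cannot be touched off the main thread.
        Color32[] pixels = webcamTexture.GetPixels32();
        int w = webcamTexture.width;
        int h = webcamTexture.height;

        // Worker thread: run the pure-CPU part of inference.
        invokeTask = Task.Run(() =>
        {
            // Hypothetical overload taking raw pixels instead of a Texture.
            poseNet.Invoke(pixels, w, h);
        });
    }
}
```

The results would still have to be read back on the main thread (e.g. checked in a later Update) before drawing.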

Build for Android ARMv7

Thanks for the sample project. It helps me a lot.

I'm new to building apps with Unity, so I really need your help.
I'm trying to build the PoseNet sample for Android. I can build the app for ARM64 and it works very well on my phone. However, it doesn't work on another Android phone of mine, so I tried building with ARMv7 support. It built successfully, but it fails at runtime with this error:

AndroidPlayer([email protected]:34999) DllNotFoundException: Unable to load DLL 'libtensorflowlite_gpu_delegate': The specified module could not be found.
at TensorFlowLite.GpuDelegate.TfLiteGpuDelegateOptionsV2Default () [0x00000] in <00000000000000000000000000000000>:0
at TensorFlowLite.GpuDelegate..ctor () [0x00000] in <00000000000000000000000000000000>:0
at TensorFlowLite.BaseImagePredictor`1[T].CreateGpuDelegate () [0x00000] in <00000000000000000000000000000000>:0
at TensorFlowLite.BaseImagePredictor`1[T]..ctor (System.String modelPath, System.Boolean useGPU) [0x00000] in <00000000000000000000000000000000>:0
at TensorFlowLite.PoseNet..ctor (System.String modelPath) [0x00000] in <00000000000000000000000000000000>:0
at PoseNetSample.Start () [0x00000] in <00000000000000000000000000000000>:0

What is the problem? Anything that I can do?

Error on "interpreter.AllocateTensors()" while using my own model

First, I'd like to thank you for TFLite on Unity. I'm trying to use my own model, shown in the image below.
Annotation 2020-07-04 144913

I tried to load the model with your DeepLab and SSD examples, since my model is an SSD too, but I get errors at the
interpreter.AllocateTensors() and interpreter.ResizeInputTensor() calls.

I think this may be caused by my model's input format. Could you suggest a workaround to load my model with your example source code? Thank you.
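As a debugging sketch (API names are taken from this plugin's Interpreter class; treat the exact signatures as assumptions), it can help to log the model's input/output tensor info and resize the input tensor before allocating, so a shape mismatch surfaces as a readable log line rather than a native error:

```csharp
using System.IO;
using TensorFlowLite;

byte[] modelData = File.ReadAllBytes("custom_ssd.tflite"); // hypothetical path
var options = new InterpreterOptions() { threads = 2 };
var interpreter = new Interpreter(modelData, options);

// Log every input/output tensor: name, type, dimensions, quantization.
interpreter.LogIOInfo();

// If the input shape needs to change, resize it to match your data
// BEFORE calling AllocateTensors (shape here is an example).
interpreter.ResizeInputTensor(0, new int[] { 1, 300, 300, 3 });
interpreter.AllocateTensors();
```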

I need help mnist sample buffer setting

I created several different TensorFlow Lite models (3-channel, 56x56, or 150x150),
but they don't give good results,
because I have no idea how to set up the ComputeShader and ComputeBuffer.

Please let me know how.
ss
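A hedged sketch of the buffer math (names below only loosely mirror the MNIST sample): the ComputeBuffer and the managed input array both have to be sized to width × height × channels floats, so changing the model from the sample's 28x28x1 input to, say, 56x56x3 means resizing both:

```csharp
using UnityEngine;

public class CustomInputExample : MonoBehaviour
{
    // Assumed custom model input shape; the stock MNIST sample expects 28x28x1.
    const int Width = 56, Height = 56, Channels = 3;

    ComputeBuffer inputBuffer;
    float[,,] inputs;

    void Start()
    {
        // One float per channel per pixel: count = W * H * C, stride = 4 bytes.
        inputBuffer = new ComputeBuffer(Width * Height * Channels, sizeof(float));

        // The managed tensor handed to the interpreter must match the same shape.
        inputs = new float[Height, Width, Channels];

        // The compute shader that converts the texture must write the same
        // layout, e.g. compute.SetBuffer(kernel, "_Output", inputBuffer);
    }

    void OnDestroy() => inputBuffer?.Release();
}
```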

error: Apple Mach-O Linker(Id)Error, 20 duplicate symbols for architecture arm64

Hi, I used Git-LFS to download the real iOS frameworks, but when I build in Xcode, I get this error:

Ld /Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Products/Debug-iphoneos/UnityFramework.framework/UnityFramework normal arm64
    cd /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios
    export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -target arm64-apple-ios10.0 -dynamiclib -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk -L/Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Products/Debug-iphoneos -L/Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Libraries -F/Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Products/Debug-iphoneos -F/Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS -filelist /Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Intermediates.noindex/Unity-iPhone.build/Debug-iphoneos/UnityFramework.build/Objects-normal/arm64/UnityFramework.LinkFileList -install_name @rpath/UnityFramework.framework/UnityFramework -Xlinker -rpath -Xlinker @executable_path/Frameworks -Xlinker -rpath -Xlinker @loader_path/Frameworks -Xlinker -map -Xlinker /Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Intermediates.noindex/Unity-iPhone.build/Debug-iphoneos/UnityFramework.build/UnityFramework-LinkMap-normal-arm64.txt -dead_strip -Xlinker -object_path_lto -Xlinker /Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Intermediates.noindex/Unity-iPhone.build/Debug-iphoneos/UnityFramework.build/Objects-normal/arm64/UnityFramework_lto.o -Xlinker -no_deduplicate -fembed-bitcode-marker -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -weak_framework CoreMotion -weak-lSystem -ObjC -framework CoreTelephony -liPhone-lib -framework Security -framework MediaToolbox -framework CoreText -framework AudioToolbox -weak_framework AVFoundation -framework AVKit -framework CFNetwork -framework CoreGraphics -framework CoreMedia -weak_framework CoreMotion -framework CoreVideo 
-framework Foundation -framework OpenAL -framework OpenGLES -framework QuartzCore -framework SystemConfiguration -framework UIKit -liconv.2 -lil2cpp -framework tensorflow_lite_gpu_framework -framework TensorFlowLiteC -weak_framework Metal -Xlinker -dependency_info -Xlinker /Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Intermediates.noindex/Unity-iPhone.build/Debug-iphoneos/UnityFramework.build/Objects-normal/arm64/UnityFramework_dependency_info.dat -o /Users/jookitsui/Library/Developer/Xcode/DerivedData/Unity-iPhone-bakojvifnzgcachjxtdntfmekqxg/Build/Products/Debug-iphoneos/UnityFramework.framework/UnityFramework

ld: warning: arm64 function not 4-byte aligned: _unwind_tester from /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Libraries/libiPhone-lib.a(unwind_test_arm64.o)
duplicate symbol '_OBJC_CLASS_$_TFLBufferConvert' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(buffer_convert_4c6415a43f9d8b0f5f2c3c820a73836d.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(buffer_convert_1a8577b686ad6fd069f9bf8253b9c536.o)
duplicate symbol '_OBJC_METACLASS_$_TFLBufferConvert' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(buffer_convert_4c6415a43f9d8b0f5f2c3c820a73836d.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(buffer_convert_1a8577b686ad6fd069f9bf8253b9c536.o)
duplicate symbol '_OBJC_IVAR_$_TFLBufferConvert._program' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(buffer_convert_4c6415a43f9d8b0f5f2c3c820a73836d.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(buffer_convert_1a8577b686ad6fd069f9bf8253b9c536.o)
duplicate symbol '_OBJC_CLASS_$_TFLInferenceContext' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(inference_context_4388ba43df0236018021471a95720412.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(inference_context_3b4506901a38350266cb9ff0d71fb2ad.o)
duplicate symbol '_OBJC_METACLASS_$_TFLInferenceContext' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(inference_context_4388ba43df0236018021471a95720412.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(inference_context_3b4506901a38350266cb9ff0d71fb2ad.o)
duplicate symbol '_OBJC_IVAR_$_TFLInferenceContext._options' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(inference_context_4388ba43df0236018021471a95720412.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(inference_context_3b4506901a38350266cb9ff0d71fb2ad.o)
duplicate symbol '_OBJC_IVAR_$_TFLInferenceContext._computeTasks' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(inference_context_4388ba43df0236018021471a95720412.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(inference_context_3b4506901a38350266cb9ff0d71fb2ad.o)
duplicate symbol '_OBJC_IVAR_$_TFLInferenceContext._outputIds' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(inference_context_4388ba43df0236018021471a95720412.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(inference_context_3b4506901a38350266cb9ff0d71fb2ad.o)
duplicate symbol '_OBJC_IVAR_$_TFLInferenceContext._device' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(inference_context_4388ba43df0236018021471a95720412.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(inference_context_3b4506901a38350266cb9ff0d71fb2ad.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._groupsCount' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._outputBuffers' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._inputBuffers' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._uniformBuffers' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._immutableBuffers' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._description' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._resizeFunction' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._program' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_CLASS_$_TFLComputeTask' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_METACLASS_$_TFLComputeTask' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
duplicate symbol '_OBJC_IVAR_$_TFLComputeTask._groupsSize' in:
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework(compute_task_3a62ce7feb0bef230c7ce3c0a61aa542.o)
    /Users/jookitsui/bbwansha/titian_yuwen/Builds/ios/Frameworks/TensorFlowLite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC(compute_task_b8b24220c272524ddb5f7078254c21d9.o)
ld: 20 duplicate symbols for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

I have no idea what to do. Could you do me a favor?

Widescreen Option

Hi

I tried to edit the palm detection so it supports a 16:9 aspect ratio, but it forces the anchor count to be 2944, so it throws an error because of that. It also seems to get the width and height from the TF interpreter, which says the input must be 256x256. Is there a way to use a 16:9 ratio, or do I have to create a new model that supports that aspect ratio?
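One common workaround, sketched below under the assumption that the model input must stay 256x256: letterbox the 16:9 frame into the square input (pad rather than stretch), run inference, then map the resulting coordinates back using the same scale and offset. This helper only computes the mapping; the actual resize would typically be done with Graphics.Blit or a compute shader.

```csharp
using UnityEngine;

public static class Letterbox
{
    // Scale and offset (in normalized 0..1 units) that fit a srcW x srcH
    // image into a square target while preserving its aspect ratio.
    public static (Vector2 scale, Vector2 offset) Fit(int srcW, int srcH)
    {
        float aspect = (float)srcW / srcH;
        if (aspect > 1f)
        {
            // Wider than tall (e.g. 16:9): full width, vertical padding.
            float h = 1f / aspect;
            return (new Vector2(1f, h), new Vector2(0f, (1f - h) * 0.5f));
        }
        // Taller than wide: full height, horizontal padding.
        return (new Vector2(aspect, 1f), new Vector2((1f - aspect) * 0.5f, 0f));
    }
}
```

A detection at normalized (x, y) in the square input then maps back to the source frame as ((x - offset.x) / scale.x, (y - offset.y) / scale.y).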

Example SSD scene frames don't always align with image

Environment (please complete the following information):

  • OS/OS Version: [Android 11]
  • Source Version: [e.g. master/v2.3.0]
  • Unity Version: [e.g. Unity 2020.1.6f1]

Describe the bug
In the SSD example, when run on Android in portrait orientation, the raw image and the green frames can misalign when the raw image is not centered.

To Reproduce
Steps to reproduce the behavior:

  1. Go to SSD scene
  2. Move raw image up to top
  3. Test on Android in portrait mode

Expected behavior
Frames should overlay the raw image

Screenshots
alt text

Additional context
I thought this would be a simple fix, but I can't work it out, especially as it doesn't happen in the editor. Might it be shader related? Any suggestions on how to fix it would be great; all I want is for the raw image to be at the top of the screen! In landscape mode it works as expected.

Memory Leak on Android

The SSD sample scene contains a memory leak on Android and possibly other platforms too.

Steps to reproduce: load the SSD scene and leave it for a while.

Memory Leak

Processing speedup

Hi, I was able to implement a custom model based on TFLite. The issue is that it takes a very long time to compute. Is there a way to increase performance?

Thanks

Error while loading the tflite model

@asus4
2020-06-16 20:59:43.895 31324-31350/? I/Unity: intput 0: name: input_1, type: Float32, dimensions: [1,512,512,4], quantizationParams: {scale: 0 zeroPoint: 0}
TensorFlowLite.InterpreterExtension:LogIOInfo(Interpreter)
TensorFlowLite.BaseImagePredictor`1:.ctor(String, Boolean)
DeepLab:.ctor(String, Boolean)
DeepLabSample:Start()

(Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)

2020-06-16 20:59:43.896 31324-31350/? I/Unity: output 0: name: conv2d_transpose_4, type: Float32, dimensions: [1,512,512,2], quantizationParams: {scale: 0 zeroPoint: 0}
TensorFlowLite.InterpreterExtension:LogIOInfo(Interpreter)
TensorFlowLite.BaseImagePredictor`1:.ctor(String, Boolean)
DeepLab:.ctor(String, Boolean)
DeepLabSample:Start()

(Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)

2020-06-16 20:59:43.926 31324-31350/? I/chatty: uid=10665(com.asus4.tfliteunitysample) UnityMain identical 1 line
2020-06-16 20:59:43.919 31324-31324/? W/UnityMain: type=1400 audit(0.0:30503): avc: denied { read } for name="u:object_r:persist_camera_prop:s0" dev="tmpfs" ino=5830 scontext=u:r:untrusted_app_27:s0:c153,c258,c512,c768 tcontext=u:object_r:persist_camera_prop:s0 tclass=file permissive=0
2020-06-16 20:59:43.919 31324-31324/? I/chatty: uid=10665(com.asus4.tfliteunitysample) identical 4 lines
2020-06-16 20:59:43.919 31324-31324/? W/UnityMain: type=1400 audit(0.0:30508): avc: denied { read } for name="u:object_r:persist_camera_prop:s0" dev="tmpfs" ino=5830 scontext=u:r:untrusted_app_27:s0:c153,c258,c512,c768 tcontext=u:object_r:persist_camera_prop:s0 tclass=file permissive=0
2020-06-16 20:59:43.948 1309-8226/? I/CameraService: CameraService::connect call (PID -1 "com.asus4.tfliteunitysample", camera ID 1) for HAL version default and Camera API version 2
2020-06-16 20:59:43.949 1309-8226/? I/CameraService: CameraService::connect Bypass active check: com.asus4.tfliteunitysample (active: 1)
2020-06-16 20:59:43.949 1309-8226/? I/Camera2ClientBase: Camera 1: Opened. Client: com.asus4.tfliteunitysample (PID 31324, UID 10665)
2020-06-16 20:59:43.949 1309-8226/? I/CameraService: ctrl camera by motor_socket E, ctrl_buffer(1,10665,31324,com.asus4.tfliteunitysample,1)
2020-06-16 20:59:43.952 2261-3067/? D/OpNotificationController: opActiveChanged, op: 26, uid: 10665, packageName: com.asus4.tfliteunitysample, active: true
2020-06-16 20:59:43.968 870-19276/? I/CHIUSECASE: [OP_EXT] 000, com.asus4.tfliteunitysample
2020-06-16 20:59:44.010 31324-31350/? E/Unity: ArgumentException: ComputeBuffer.GetData() : Accessing 4194304 bytes at offset 0 for Compute Buffer of size 3145728 bytes is not possible.
at TensorFlowLite.BaseImagePredictor`1[T].ToTensor (UnityEngine.Texture inputTex, System.Single[,,] inputs) [0x00000] in <00000000000000000000000000000000>:0
at DeepLab.Invoke (UnityEngine.Texture inputTex) [0x00000] in <00000000000000000000000000000000>:0
at DeepLabSample.Update () [0x00000] in <00000000000000000000000000000000>:0

(Filename: currently not available on il2cpp Line: -1)

Crashing app in mobile

Unity version : 2019.3.3f1
Platform : Mobile(Android)

The app crashes after about 4 minutes.
Any help would be appreciated, even if it is not related to this plugin.
Thank you.

2020-07-29 11:50:38.357 28624-28624/? E/MtpService: onCreate.
2020-07-29 11:50:38.358 28624-28624/? E/MtpService: mContext is NULL so getting the getApplicationContext
2020-07-29 11:50:38.359 28624-28624/? E/MtpService: < MTP > Registering BroadCast registerBroadCastPolicyRec :::::
2020-07-29 11:50:38.361 28624-28624/? E/MtpService: < MTP > Registering BroadCast registerBroadCastEmergencyRec :::::
2020-07-29 11:50:38.362 28624-28624/? E/MtpService: onStartCommand.
2020-07-29 11:50:38.362 28624-28646/? E/MtpService: handleMessage. msg= { when=0 what=0 arg1=3 target=com.samsung.android.MtpApplication.MtpService$ServiceHandler }
2020-07-29 11:50:38.378 28624-28646/? E/MtpService: mContext is NULL so getting the getApplicationContext

2020-07-29 11:50:41.419 27014-28558/? E/Auth: [GoogleAccountDataServiceImpl] getToken() -> BAD_AUTHENTICATION. Account: ELLIDED:615807374, App: com.google.android.apps.maps, Service: oauth2:https://www.googleapis.com/auth/mobilemaps.firstparty https://www.googleapis.com/auth/notifications
rpt: Long live credential not available.
at gde.a(:com.google.android.gms@[email protected] (120400-319035315):18)
at gbn.a(:com.google.android.gms@[email protected] (120400-319035315):124)
at cro.a(:com.google.android.gms@[email protected] (120400-319035315):238)
at cro.a(:com.google.android.gms@[email protected] (120400-319035315):111)
at cro.a(:com.google.android.gms@[email protected] (120400-319035315):246)
at cpm.onTransact(:com.google.android.gms@[email protected] (120400-319035315):5)
at android.os.Binder.transact(Binder.java:914)
at ctj.onTransact(:com.google.android.gms@[email protected] (120400-319035315):2)
at android.os.Binder.transact(Binder.java:914)
at aadb.onTransact(:com.google.android.gms@[email protected] (120400-319035315):17)
at android.os.Binder.execTransactInternal(Binder.java:1021)
at android.os.Binder.execTransact(Binder.java:994)
2020-07-29 11:50:41.420 27014-28558/? W/Auth: [GetToken] GetToken failed with status code: BadAuthentication
2020-07-29 11:50:41.467 1260-1352/? E/Watchdog: !@sync 3813 [2020-07-29 11:50:41.465] FD count : 582, wdog_way : pwdt

SSD Demo is not working with Teachable Exports

Thank you for posting this example and plugin. It's a great addition for Unity3d.

I'm having a small problem using custom data from Google's Teachable Machine. The exported quantized TensorFlow Lite models show an error in Unity.

I've tested a few exports with the TensorFlow Lite image_classification iOS example and they all work without issue.

Any idea why these exported models won't work in Unity with the plug-In?

The following are my steps with more details:

Teachable:
https://teachablemachine.withgoogle.com/train/image

  1. Create 3 classes with 20 photos in each class
  2. Train the model with the default settings
  3. Select "Export Model"
  4. Select the 3rd tab (TensorFlow Lite) with "Quantized" selected
  5. Select "Download my model"

It'll only take about 5 minutes to create a sample model for a test.

Unity Error

Exception: TensorFlowLite operation failed.
TensorFlowLite.Interpreter.ThrowIfError (TensorFlowLite.Interpreter+Status status) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:196)
TensorFlowLite.Interpreter.GetOutputTensorData (System.Int32 outputTensorIndex, System.Array outputTensorData) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:147)
TensorFlowLite.SSD.Invoke (UnityEngine.Texture inputTex) (at Assets/Samples/SSD/SSD.cs:34)
SsdSample.Update () (at Assets/Samples/SSD/SsdSample.cs:62)

Thanks in advance for your help.
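A possible cause, assuming the quantized export uses uint8 tensors (not confirmed from this report): `GetOutputTensorData` fails when the managed array's byte size doesn't match the tensor's byte size exactly. A rough sketch of that size arithmetic (`tensor_bytes` is a hypothetical helper):

```python
# Hypothetical sketch: a copy into a managed buffer is rejected when the
# buffer's byte size differs from the tensor's byte size.
import math

def tensor_bytes(shape, bytes_per_element):
    """Total byte size of a tensor with the given shape."""
    return math.prod(shape) * bytes_per_element

# A 3-class classifier output, shape [1, 3]:
shape = (1, 3)
print(tensor_bytes(shape, 1))  # uint8 (quantized) tensor: 3 bytes
print(tensor_bytes(shape, 4))  # float[3] managed buffer: 12 bytes -> mismatch
```

If that is what's happening here, reading the output into a byte-sized buffer (or exporting a float model) would be the thing to try.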

failed to build for iOS

Environment (please complete the following information):

  • OS: iOS
  • Version latest

Describe the bug
After building for iOS from Unity, the app fails to link when built in Xcode.

Screenshots
If applicable, add screenshots to help explain your problem.
ld: warning: ignoring file /Users/seunghun/Desktop/ios/Frameworks/com.github.asus4.tflite/Plugins/iOS/tensorflow_lite_gpu_framework.framework/tensorflow_lite_gpu_framework, building for iOS-arm64 but attempting to link with file built for unknown-unsupported file format ( 0x76 0x65 0x72 0x73 0x69 0x6F 0x6E 0x20 0x68 0x74 0x74 0x70 0x73 0x3A 0x2F 0x2F )
ld: warning: ignoring file /Users/seunghun/Desktop/ios/Frameworks/com.github.asus4.tflite/Plugins/iOS/TensorFlowLiteC.framework/TensorFlowLiteC, building for iOS-arm64 but attempting to link with file built for unknown-unsupported file format ( 0x76 0x65 0x72 0x73 0x69 0x6F 0x6E 0x20 0x68 0x74 0x74 0x70 0x73 0x3A 0x2F 0x2F )
Undefined symbols for architecture arm64:
"_TFLGpuDelegateCreate", referenced from:
_InterpreterOptions_CreateGpuDelegate_m9CF4B5134A4CA45B0D6930F21745584B965A948C in TensorFlowLite.o
_MetalDelegate__ctor_m413ED5FE1CE943B2B30692F6792FE54F632B77F6 in TensorFlowLite.o
_MetalDelegate_TFLGpuDelegateCreate_m4CA4AB8D8DD023BE7D39A7D101A9819BEEBD6B6E in TensorFlowLite.o
(maybe you meant: _MetalDelegate_TFLGpuDelegateCreate_m4CA4AB8D8DD023BE7D39A7D101A9819BEEBD6B6E)
"_TfLiteInterpreterOptionsSetErrorReporter", referenced from:
_InterpreterOptions__ctor_mD75762A156970C68CF0CD31C42736CA2CF3B813E in TensorFlowLite.o
_InterpreterOptions_TfLiteInterpreterOptionsSetErrorReporter_mBAEFDC811B5245633A6F21ED3718FB6096EE95D3 in TensorFlowLite.o
(maybe you meant: _InterpreterOptions_TfLiteInterpreterOptionsSetErrorReporter_mBAEFDC811B5245633A6F21ED3718FB6096EE95D3)
"_TfLiteTensorName", referenced from:
_Interpreter_GetTensorName_m789E4ACB99A50F75B84681F8CC500287003FCB3B in TensorFlowLite.o
_Interpreter_TfLiteTensorName_m45B4CF90530E0171F63BA303BB95444C18056218 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorName_m45B4CF90530E0171F63BA303BB95444C18056218)
"_TfLiteTensorDim", referenced from:
_Interpreter_GetTensorInfo_mA5E98D07988B26FF9926FF3802099B51F6C00CC8 in TensorFlowLite.o
_Interpreter_TfLiteTensorDim_m694EDEC3C38A07E30B52F99375F6388F44C278BE in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorDim_m694EDEC3C38A07E30B52F99375F6388F44C278BE)
"_TfLiteInterpreterGetInputTensorCount", referenced from:
_Interpreter_GetInputTensorCount_m92ABB6CAD4F368066E5ECC80C60D5DA27A69E303 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterGetInputTensorCount_m7FFD7F21300E23D99D5E4CDE922C9E5951148E3A in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterGetInputTensorCount_m7FFD7F21300E23D99D5E4CDE922C9E5951148E3A)
"_TfLiteInterpreterOptionsSetNumThreads", referenced from:
_InterpreterOptions_set_threads_m56CE436B165FDDB99C3BB1ED481D51A839A3CE67 in TensorFlowLite.o
_InterpreterOptions_TfLiteInterpreterOptionsSetNumThreads_mAFE3FC8181C0BFF2BED6A9CF06E64B100B3CD1C2 in TensorFlowLite.o
(maybe you meant: _InterpreterOptions_TfLiteInterpreterOptionsSetNumThreads_mAFE3FC8181C0BFF2BED6A9CF06E64B100B3CD1C2)
"_TfLiteInterpreterOptionsCreate", referenced from:
_InterpreterOptions__ctor_mD75762A156970C68CF0CD31C42736CA2CF3B813E in TensorFlowLite.o
_InterpreterOptions_TfLiteInterpreterOptionsCreate_m1502A66E89FFE410ADBB49E103DA3765AC84157A in TensorFlowLite.o
(maybe you meant: _InterpreterOptions_TfLiteInterpreterOptionsCreate_m1502A66E89FFE410ADBB49E103DA3765AC84157A)
"_TfLiteTensorQuantizationParams", referenced from:
_Interpreter_GetTensorInfo_mA5E98D07988B26FF9926FF3802099B51F6C00CC8 in TensorFlowLite.o
_Interpreter_TfLiteTensorQuantizationParams_mDF84C6660781360E49AA58A532F74A6B5333393C in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorQuantizationParams_mDF84C6660781360E49AA58A532F74A6B5333393C)
"_TfLiteInterpreterGetOutputTensor", referenced from:
_Interpreter_GetOutputTensorData_mC553A53E5016BA9A81FD061A85C1557714C622E5 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterGetOutputTensor_mF0D6705F26BE2D49C4CDFA59E5038B4C98B0836E in TensorFlowLite.o
_Interpreter_GetOutputTensorInfo_mA1040095C912D1220F50B8BF9F1808E222C20ECF in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterGetOutputTensor_mF0D6705F26BE2D49C4CDFA59E5038B4C98B0836E, _Interpreter_TfLiteInterpreterGetOutputTensorCount_m2606C10C1C4D7DF41B561FBC88DEF679FA0F4437 )
"_TFLGpuDelegateDelete", referenced from:
_MetalDelegate_Dispose_m8DA638B035C3A79924DD4B53919BDBFD50A0DF5A in TensorFlowLite.o
_MetalDelegate_TFLGpuDelegateDelete_m68AABE0EF59C7D31866CD51BDC5FA5AE9BBFCE7E in TensorFlowLite.o
(maybe you meant: _MetalDelegate_TFLGpuDelegateDelete_m68AABE0EF59C7D31866CD51BDC5FA5AE9BBFCE7E)
"_TfLiteTensorType", referenced from:
_Interpreter_GetTensorInfo_mA5E98D07988B26FF9926FF3802099B51F6C00CC8 in TensorFlowLite.o
_Interpreter_TfLiteTensorType_m5C720268382B76E918FE028B140BA3D6C91DE0FB in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorType_m5C720268382B76E918FE028B140BA3D6C91DE0FB)
"_TfLiteInterpreterGetInputTensor", referenced from:
_Interpreter_SetInputTensorData_m8E2595ECA226081E831D54C19B3035E86315604A in TensorFlowLite.o
_Interpreter_TfLiteInterpreterGetInputTensor_mCAADED55C4420901610DD5734D4AD3F5BA83DC0A in TensorFlowLite.o
_Interpreter_GetInputTensorInfo_m057FCA64EF9DCF993358A9B508136FDE0BFB780A in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterGetInputTensorCount_m7FFD7F21300E23D99D5E4CDE922C9E5951148E3A, _Interpreter_TfLiteInterpreterGetInputTensor_mCAADED55C4420901610DD5734D4AD3F5BA83DC0A )
"_TfLiteTensorNumDims", referenced from:
_Interpreter_GetTensorInfo_mA5E98D07988B26FF9926FF3802099B51F6C00CC8 in TensorFlowLite.o
_Interpreter_TfLiteTensorNumDims_m3B7B8BB15E5C8808CE4B430C571F8E975A4210B4 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorNumDims_m3B7B8BB15E5C8808CE4B430C571F8E975A4210B4)
"_TfLiteTensorCopyToBuffer", referenced from:
_Interpreter_GetOutputTensorData_mC553A53E5016BA9A81FD061A85C1557714C622E5 in TensorFlowLite.o
_Interpreter_TfLiteTensorCopyToBuffer_m05FF7E7947B3D1936634A234254F2CE81F1551BF in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorCopyToBuffer_m05FF7E7947B3D1936634A234254F2CE81F1551BF)
"_TfLiteInterpreterAllocateTensors", referenced from:
_Interpreter_AllocateTensors_m87C3D29EA509021F9B6E30E3E717548BC0781DB3 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterAllocateTensors_m7BA0CBCECD65A1FC061ABD5B3D3DD1775499404A in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterAllocateTensors_m7BA0CBCECD65A1FC061ABD5B3D3DD1775499404A)
"_TfLiteTensorCopyFromBuffer", referenced from:
_Interpreter_SetInputTensorData_m8E2595ECA226081E831D54C19B3035E86315604A in TensorFlowLite.o
_Interpreter_TfLiteTensorCopyFromBuffer_m214FACD0A3EA3896904B8B27F5E4B1A8CE13499A in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteTensorCopyFromBuffer_m214FACD0A3EA3896904B8B27F5E4B1A8CE13499A)
"_TfLiteInterpreterGetOutputTensorCount", referenced from:
_Interpreter_GetOutputTensorCount_m96463F4D302EA3D4B5C9563A968801B49BD1B7B8 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterGetOutputTensorCount_m2606C10C1C4D7DF41B561FBC88DEF679FA0F4437 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterGetOutputTensorCount_m2606C10C1C4D7DF41B561FBC88DEF679FA0F4437)
"_TfLiteInterpreterInvoke", referenced from:
_Interpreter_Invoke_m228B38FCA1CCEDE758CBA924CC325BC7C8732AA6 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterInvoke_mFB177A3C0D7EDF73045D8E952E051435BEF24CBF in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterInvoke_mFB177A3C0D7EDF73045D8E952E051435BEF24CBF)
"_TfLiteInterpreterResizeInputTensor", referenced from:
_Interpreter_ResizeInputTensor_mEA17BDF5F7ADE68652E3606CF4B744683AEA668A in TensorFlowLite.o
_Interpreter_TfLiteInterpreterResizeInputTensor_m307802B59A2200D95F48B17F40416578BF696968 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterResizeInputTensor_m307802B59A2200D95F48B17F40416578BF696968)
"_TfLiteModelDelete", referenced from:
_Interpreter_Dispose_mA3DB3B46FF7C3047FCFC47CC627D71488C8061B5 in TensorFlowLite.o
_Interpreter_TfLiteModelDelete_m600AE3A0AA1FB351BD00F07417086A15815C5A60 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteModelDelete_m600AE3A0AA1FB351BD00F07417086A15815C5A60)
"_TfLiteModelCreate", referenced from:
_Interpreter__ctor_mF1E8928B8C931C4786E6D5302D200E7616B676D3 in TensorFlowLite.o
_Interpreter_TfLiteModelCreate_m89DA4A8D9E41006AC6460276528DE61E1CA594C9 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteModelCreate_m89DA4A8D9E41006AC6460276528DE61E1CA594C9)
"_TfLiteInterpreterOptionsDelete", referenced from:
_InterpreterOptions_Dispose_m324112C3E24DC6FE78463DFFCEBD5EC38D022236 in TensorFlowLite.o
_InterpreterOptions_TfLiteInterpreterOptionsDelete_m1932391AAF4EA1B7F387CF0B61A34D0B860BABAD in TensorFlowLite.o
(maybe you meant: _InterpreterOptions_TfLiteInterpreterOptionsDelete_m1932391AAF4EA1B7F387CF0B61A34D0B860BABAD)
"_TfLiteInterpreterOptionsAddDelegate", referenced from:
_InterpreterOptions_AddGpuDelegate_m028C34BD98FB94386DCBAC1952494D5159D92893 in TensorFlowLite.o
_InterpreterOptions_TfLiteInterpreterOptionsAddDelegate_m163B9B3951EC9F7B971426F9E4C24F91C1901134 in TensorFlowLite.o
(maybe you meant: _InterpreterOptions_TfLiteInterpreterOptionsAddDelegate_m163B9B3951EC9F7B971426F9E4C24F91C1901134)
"_TfLiteInterpreterDelete", referenced from:
_Interpreter_Dispose_mA3DB3B46FF7C3047FCFC47CC627D71488C8061B5 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterDelete_mDBADEA03FF68E11454A0305C9C8F71744A5236F6 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterDelete_mDBADEA03FF68E11454A0305C9C8F71744A5236F6)
"_TfLiteInterpreterCreate", referenced from:
_Interpreter__ctor_mF1E8928B8C931C4786E6D5302D200E7616B676D3 in TensorFlowLite.o
_Interpreter_TfLiteInterpreterCreate_m59679647837EBEADA0D889BF124811535293D1A5 in TensorFlowLite.o
(maybe you meant: _Interpreter_TfLiteInterpreterCreate_m59679647837EBEADA0D889BF124811535293D1A5)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
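As a side note, the byte sequence in the "unknown-unsupported file format" warnings above decodes to plain ASCII, which hints at the likely cause:

```python
# The bytes ld reports for the "unsupported file format" decode to ASCII text.
blob = bytes([0x76, 0x65, 0x72, 0x73, 0x69, 0x6F, 0x6E, 0x20,
              0x68, 0x74, 0x74, 0x70, 0x73, 0x3A, 0x2F, 0x2F])
print(blob.decode("ascii"))  # -> version https://
```

That is the first line of a Git LFS pointer file, which suggests the .framework binaries were checked out as LFS pointer text files rather than the real binaries; running `git lfs pull` in the repository is the usual remedy.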

Cannot show 3D model over ImageRaw(WebCamera)

I load a 3D model in the SSD example, but it is occluded by the web camera (RawImage). I guess this is due to the Hidden/TFLite/Resize shader. How can I render a 3D model in front of the web camera? Do you have any suggestions? Thank you :)

Failed TensorflowLite Operation

The input shape of my model is [ 1 256 256 3]

I'm getting the following error:

Exception: TensorFlowLite operation failed.
TensorFlowLite.Interpreter.ThrowIfError (System.Int32 resultCode)

How do I fix this?

I have invoked the Model as follows,

model.Invoke(background.texture);

background is the RawImage. I tried saving the RawImage to a PNG and, of course, it is not null.

The following is my Invoke function:

public override void Invoke(Texture inputTex)
{
    ToTensor(inputTex, inputs);

    Debug.LogWarning(inputTex.height + " " + inputs.Length);
    interpreter.SetInputTensorData(0, inputs);
    interpreter.Invoke();
}

How do I fix this error?
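A quick sanity check that can rule out a size mismatch (a sketch of the idea, not the plugin's actual validation): the `inputs` array passed to `SetInputTensorData` must contain exactly as many elements as the input tensor's shape implies.

```python
import math

def expected_elements(shape):
    """Element count implied by an NHWC tensor shape."""
    return math.prod(shape)

# For the [1, 256, 256, 3] input described above:
print(expected_elements((1, 256, 256, 3)))  # 196608
```

If `inputs.Length` differs from this, or the model is quantized and expects bytes rather than floats, the copy into the tensor is rejected with exactly this kind of exception.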

Not working on Android

Environment

  • OS/OS Version: Android 9
  • Unity Version: Unity 2019.4.12.f1

Describe the bug
All the samples run well on the PC (using the PC webcam), but when I build and run for Android, only a white rectangle appears and nothing happens. The only sample I was able to run successfully on Android is TestWebCamRotation.
I tried different combinations of Vulkan and OpenGLES3 with no success. Any hints?

Note
You did a really good job. Keep going!

WebCamTexture has wrong rotation in mobile phone

When I run the SSD sample on Windows, the camera view works perfectly fine, but on an Android phone the camera view on the screen is upside down. How do I fix the orientation of the camera view?
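A common remedy (a sketch of the general idea, not this sample's actual code) is to mirror the texture's V coordinate when the platform reports a vertically mirrored camera, e.g. when `WebCamTexture.videoVerticallyMirrored` is true:

```python
# Flipping the V coordinate of a UV pair turns an upside-down camera view
# right side up.
def flip_vertical(uv):
    u, v = uv
    return (u, 1.0 - v)

print(flip_vertical((0.25, 0.0)))  # (0.25, 1.0)
```

Rotation reported by `WebCamTexture.videoRotationAngle` is typically compensated the same way, by transforming the UVs (or the displaying quad) rather than the pixels themselves.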

DLL not found exception

  • Android 10 (Google pixel 3a).
  • Source Version: 2.3.0.
  • Unity Version: Unity 2019.4.6.f1.

PoseNet not working on android. Logcat produces a DLL not found exception for libtensorflowlite_c.dll.

To reproduce, add package to a new unity project using openupm and use posenet on mobile.

libtensorflowlite_c.dll issue

I get this error when I try to run. Please let me know what's wrong.

Plugins: Failed to load 'Assets/plugins/Windows/libtensorflowlite_c.dll' because one or more of its dependencies could not be loaded.
TensorFlowLite.BaseImagePredictor`1:.ctor(String, Boolean) (at Assets/Samples/Common/BaseImagePredictor.cs:34)
TensorFlowLite.DeepLab:.ctor(String, ComputeShader) (at Assets/Samples/DeepLab/DeepLab.cs:51)
DeepLabSample:Start() (at Assets/Samples/DeepLab/DeepLabSample.cs:22)

Custom model (AutoML vision edge lite model): Tensorflow operation failed exception

Hi,
We created a custom model using Google's AutoML Vision Edge kit for object detection
and applied the model to the SSD Unity sample scene.
When running on an Android device, the model loads correctly and also displays the tensor info; please check the Log1.png file.
But the SSD Invoke function throws an exception while getting the tensor output details; please check the Log2.png file.
Log1
Log2

The exported model and labels are also attached inside the zip file:
exported_model.zip

Please help me resolve this issue.
Thanks

BlazePose for video use cases

Apologies, please excuse my writing in Japanese.

I've been using tf-lite-unity-sample as a great reference. Thank you very much.
Let me ask a few questions about BlazePose.

BlazePose recognition is not very stable

Summary

  • BlazePose recognition was not stable
  • tf-lite-unity-sample runs the detector (PoseDetect.cs) every frame; changing it to run only when needed made recognition stable
  • Diff: ganaware@2758b47 (*1)

Background

This may be due to my environment (PC and Pixel 4a), but BlazePose recognition in tf-lite-unity-sample was not stable, and compared to the demo on the MediaPipe site, which recognizes poses stably, the accuracy seemed considerably worse.

So I looked closely at the source and noticed that the two-step detector-tracker ML pipeline implemented here differs slightly from the one implemented in MediaPipe.

According to the MediaPipe documentation, when applied to video, the detector (PoseDetect.cs) should run only on the first frame, or when the tracker (PoseLandmarkDetect.cs) fails to recognize a pose; normally, recognition is done with the tracker alone.

I tried implementing it that way (ganaware@2758b47), and I believe recognition became very stable.
(In this diff, several constants are also changed to the values used in the demo.)
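The detector/tracker gating described above can be sketched roughly like this (hypothetical helper names, not the repository's API):

```python
# Run the detector only on the first frame or after the tracker loses the pose;
# otherwise reuse the last region of interest and run the tracker alone.

def process_frame(frame, state, detect, track):
    if state["roi"] is None:           # first frame, or tracking was lost
        state["roi"] = detect(frame)
    pose = track(frame, state["roi"])
    if pose is None:                   # tracker lost the pose: re-detect next frame
        state["roi"] = None
    return pose

# Toy run: the tracker fails on frame 2, so the detector fires on frames 0 and 3.
calls = []
detect = lambda f: calls.append(f) or "roi"
track = lambda f, roi: None if f == 2 else "pose"
state = {"roi": None}
results = [process_frame(f, state, detect, track) for f in range(4)]
print(calls)  # [0, 3]
```

The point of the change is exactly this: in steady state only the (cheaper, smoother) tracker runs, which is what the MediaPipe pipeline does for video.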

The origin of the constants used for full-body recognition

Summary

  • PoseLandmarkDetectFullBody sets PoseScale to 1.8
  • I could not find where the value 1.8 comes from

Could you tell me where the value 1.8 comes from?
Also, if you know the source of the code that performs full-body recognition, could you share it?

Background

After applying the fix in *1 above, upper-body recognition works stably, but full-body recognition does not work well, and I ended up having to apply a strange scale to the keypoints obtained from the tracker. (*2)

                if (mode == Mode.FullBody)
                {
                    p.x = (p.x - 0.5f) * 0.84f + 0.5f; // WHY?
                    p.y = (p.y - 0.5f) * 0.84f + 0.5f; // WHY?
                }

This may be because I don't fully understand the coordinate transforms such as CalcCropMatrix() implemented in tf-lite-unity-sample.

For the upper-body case, I believe I understood it by reading the code of the demo on the MediaPipe site and the MediaPipe source itself, but I could not find the code for the full-body case, so I could not work out why the situation in *2 arises.

So, if you know where the value 1.8 comes from, or the source of the code that performs full-body recognition, could you please let me know?

[PoseNet]The camera's screen is upside down

Environment (please complete the following information):

  • OS/OS Version: macOS 10.15.6
  • Source Version: [e.g. master/v2.2.0]
  • Unity Version: [e.g. Unity 2019.4.5.f1]

Describe the bug
[PoseNet]The camera's screen is upside down

Troubles with Quantized TFLite models

Hi,

Firstly, thanks for this repo; the examples are clear and work perfectly well. I successfully integrated my own UNet model using your DeepLab scene implementation.

But today I tried to dig deeper into optimization, so I tried to use quantized models.
I took mobilenetv2_coco_voc_trainaug_8bit from the official list (https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/quantize.md) and injected it into the DeepLab scene.
I fixed some issues due to the fact that ArgMax is already applied inside the model and the output is [1, 513, 513] instead of the previous [1, 513, 513, 21].
Your DeepLab class became:


public class DeepLab2 : BaseImagePredictor<float>
{
    ...

    float[,] outputs0; // height, width

    ...

    public DeepLab2(string modelPath, ComputeShader compute) : base(modelPath, true)
    {
        var odim0 = interpreter.GetOutputTensorInfo(0).shape;

        Debug.Assert(odim0[1] == height);
        Debug.Assert(odim0[2] == width);

        outputs0 = new float[odim0[1], odim0[2]];
        labelPixels = new Color32[width * height];
        labelTex2D = new Texture2D(width, height, TextureFormat.RGBA32, 0, false);
        ...
    }

    ...

    public override void Invoke(Texture inputTex)
    {
        ToTensor(inputTex, inputs);

        interpreter.SetInputTensorData(0, inputs);
        interpreter.Invoke();
        interpreter.GetOutputTensorData(0, outputs0);
    }

    ...

    public Texture2D GetResultTexture2D()
    {
        int rows = outputs0.GetLength(0); // y
        int cols = outputs0.GetLength(1); // x
        // int labels = outputs0.GetLength(2);
        for (int y = 0; y < rows; y++)
        {
            for (int x = 0; x < cols; x++)
            {
                labelPixels[y * cols + x] = COLOR_TABLE[(int)outputs0[x, y]];
            }
        }

        labelTex2D.SetPixels32(labelPixels);
        labelTex2D.Apply();

        return labelTex2D;
    }

    ...
}

But now I get an error from Interpreter.cs:

Exception: TensorFlowLite operation failed.
TensorFlowLite.Interpreter.ThrowIfError (System.Int32 resultCode) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:196)
TensorFlowLite.Interpreter.SetInputTensorData (System.Int32 inputTensorIndex, System.Array inputTensorData) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:122)
TensorFlowLite.DeepLab2.Invoke (UnityEngine.Texture inputTex) (at Assets/Samples/DeepLab513/DeepLab2.cs:97)
DeepLabSample2.Execute (UnityEngine.Texture texture) (at Assets/Samples/DeepLab513/DeepLabSample2.cs:45)
DeepLabSample2.Update () (at Assets/Samples/DeepLab513/DeepLabSample2.cs:40)

If you have any ideas :)

Thank you in advance!
Best regards,
--Selim
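One assumption worth testing (not confirmed from this report): the 8-bit model exposes uint8 input tensors, while `BaseImagePredictor<float>` fills a float32 buffer, so the byte-size check in `SetInputTensorData` fails. TFLite's affine quantization maps between the two representations roughly as follows:

```python
# TFLite affine quantization: real = scale * (q - zero_point),
# so a real value r quantizes to q = round(r / scale) + zero_point.

def quantize(r, scale, zero_point):
    q = round(r / scale) + zero_point
    return max(0, min(255, q))       # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

# Example with scale = 1/255 and zero_point = 0 (hypothetical parameters;
# the real ones come from the tensor's quantization params):
q = quantize(0.5, 1.0 / 255.0, 0)
print(q, dequantize(q, 1.0 / 255.0, 0))
```

If this is the cause, feeding byte data quantized with the tensor's actual scale/zero_point (exposed via TfLiteTensorQuantizationParams) instead of raw floats would be the fix to try.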

More of an info request than an issue.

Hello.

Could you please tell me how you compiled the TF Lite library files for the different platforms? Are they available online?

Best Regards,
Nihal Kenkre

iOS linking error

Environment (please complete the following information):

  • OS/OS Version: iOS 13.5.1
  • Source Version: master/v2.2.0
  • Unity Version: 2019.4.5.f1

Describe the bug

  • Linking error
  • Git LFS already setup
  • framework folders exists.
cd Frameworks/com.github.asus4.tflite/Plugins/iOS
ls
TensorFlowLiteC.framework
_TensorFlowLiteC.framework
tensorflow_lite_gpu_framework.framework

To Reproduce
Steps to reproduce the behavior:

  1. Build for iOS from Unity
  2. Open Xcode, then build & run

Expected behavior
Run iOS app without error.

Screenshots
Screen Shot 2020-08-19 at 1 43 55 PM

DllNotFoundException libtensorflowlite_c

I tried to import the plugins into my project.
I imported the two folders "Common" and "TensorFlowLite".
The macOS Editor works fine, but Android fails with "DllNotFoundException libtensorflowlite_c".
Is there anything I'm missing?

Custom model

How do I use my own custom model for the SSD sample?

How to build NNAPI delegate?

Hi, thanks for your nice work.
I want to use the NNAPI delegate like the GPU delegate.
Does anybody have any idea how to build it?

Tensorflow Operation Failed for EMNIST Balanced

EMNIST TFLite model for letters and numbers.
The model I'm using has 47 output classes, but when I set the length of the outputs array to 47, I get a "TensorFlow Lite operation failed" error.


Error:
Exception: TensorFlowLite operation failed.
TensorFlowLite.Interpreter.ThrowIfError (System.Int32 resultCode) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:196)
TensorFlowLite.Interpreter.GetOutputTensorData (System.Int32 outputTensorIndex, System.Array outputTensorData) (at Assets/TensorFlowLite/SDK/Scripts/Interpreter.cs:147)
TextClassifier.Infer (UnityEngine.RenderTexture texture) (at Assets/Scenes/Text Classifier/TextClassifier.cs:78)
LineDrawer+d__24.MoveNext () (at Assets/Scenes/Text Classifier/LineDrawer.cs:161)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <480508088aee40cab70818ff164a29d5>:0)

FaceMesh rotated on Face Detection example

Hello guys,

I've tried your great face detection example on an iPad Pro and it's working out of the box, so excellent job on that. The only issue I'm facing right now is that the rendered face mesh is rotated 90 degrees on Z and 180 degrees on Y. I already tried to fix this by applying quaternions to the face.keypoints array, but with no luck. Maybe you could help me with that.
Thanks so much!

Added: it's a wild guess, but could it be due to Unity's left-handed coordinates? (Although what's weird is that the quad and the face detection dots are oriented correctly and follow correctly... it's just the face mesh itself that seems to be rotated and wrongly scaled on Z.) Another guess is that it might be due to the iPad camera texture, which is normally also rotated by 90 degrees.

Use for audio models

Hello, this is a great project! Thanks!
I don't have much experience with deep learning. I would like to use a TFLite speech command recognition model I downloaded with your example project, but I'm having trouble with the input shape.

I've been trying different things and I sometimes get:

  • An error saying the input must be an array of primitives
  • "TensorFlowLite operation failed"
  • Sometimes it just crashes and Unity closes

Have you tried any audio models in your project? Is there a known issue with them?

Any clue that helps me fix it will be more than welcome!

Fail to run my own sample after build.

The samples from this repo work perfectly in both the Unity Editor and on my Android phone.
I am working on my own PoseNet, modifying some of the architecture and inputs (I don't think that matters). It works perfectly well in the Unity Editor, but after building it just fails.
At first I thought it might be a path issue, but I am sure the .tflite file was placed in the StreamingAssets folder. Then I dove into the logs from adb and found the following:
04-24 20:31:36.370: D/Unity(16500): Unable to lookup library path for 'libtensorflowlite_gpu_delegate', native render plugin support disabled.
04-24 20:31:36.370: E/Unity(16500): Unable to find libtensorflowlite_gpu_delegate
04-24 20:31:36.373: D/Unity(16500): Unable to lookup library path for 'libtensorflowlite_c', native render plugin support disabled.
04-24 20:31:36.374: E/Unity(16500): Unable to find libtensorflowlite_c

But it occurs in both your samples and my code, so I don't think it is the main issue.
Then I found:
04-24 20:31:39.215: I/Unity(16500): (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
04-24 20:31:39.615: E/Unity(16500): Exception: Failed to create TensorFlowLite Interpreter
04-24 20:31:39.615: E/Unity(16500): at TensorFlowLite.Interpreter..ctor (System.Byte[] modelData, TensorFlowLite.Interpreter+Options options) [0x00000] in <00000000000000000000000000000000>:0
04-24 20:31:39.615: E/Unity(16500): at TensorFlowLite.BaseImagePredictor`1[T]..ctor (System.String modelPath, System.Boolean useGPU) [0x00000] in <00000000000000000000000000000000>:0
04-24 20:31:39.615: E/Unity(16500): at PoseNet2..ctor (System.String modelPath) [0x00000] in <00000000000000000000000000000000>:0
04-24 20:31:39.615: E/Unity(16500): at MyPoseScript.Start () [0x00000] in `

I spent almost two days trying to solve this problem but failed. Could you give me some advice? Any help would be appreciated.
