
iwatake2222 / play_with_tflite

Stars: 344 · Watchers: 16 · Forks: 78 · Size: 9.78 MB

Sample projects for TensorFlow Lite in C++ with delegates such as GPU, EdgeTPU, XNNPACK, NNAPI

License: Apache License 2.0

CMake 2.06% C++ 97.31% Java 0.44% C 0.12% Jupyter Notebook 0.06% Shell 0.01% Python 0.01%
tensorflow tensorflow-lite cpp opencv edgetpu deep-learning

play_with_tflite's Introduction

Play with tflite

  • Sample projects that use TensorFlow Lite in C++ on multiple platforms
  • The typical project structure is shown in the following diagram
    • 00_doc/design.jpg

Target

  • Platform
    • Linux (x64)
    • Linux (armv7)
    • Linux (aarch64)
    • Android (aarch64)
    • Windows (x64), Visual Studio 2019
  • Delegate
    • Edge TPU
    • XNNPACK
    • GPU
    • NNAPI (CPU, GPU, DSP)

Usage

./main [input]

 - input = blank
    - use the default image file set in the source code (main.cpp)
    - e.g. ./main
 - input = *.mp4, *.avi, *.webm
    - use a video file
    - e.g. ./main test.mp4
 - input = *.jpg, *.png, *.bmp
    - use an image file
    - e.g. ./main test.jpg
 - input = number (e.g. 0, 1, 2, ...)
    - use a camera
    - e.g. ./main 0
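
The dispatch rules above can be sketched as a small classifier on the argument. A minimal sketch, assuming nothing about the repository's actual main.cpp (the function and enum names here are illustrative only):

#include <cctype>
#include <string>

// Hypothetical sketch of the input dispatch described above.
enum class InputType { kDefaultImage, kVideoFile, kImageFile, kCamera };

InputType ClassifyInput(const std::string& input)
{
    if (input.empty()) return InputType::kDefaultImage;       // ./main
    if (std::isdigit(static_cast<unsigned char>(input[0])))
        return InputType::kCamera;                            // ./main 0
    const auto pos = input.find_last_of('.');
    const std::string ext = (pos == std::string::npos) ? "" : input.substr(pos);
    if (ext == ".mp4" || ext == ".avi" || ext == ".webm")
        return InputType::kVideoFile;                         // ./main test.mp4
    return InputType::kImageFile;                             // ./main test.jpg
}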

How to build a project

0. Requirements

  • OpenCV 4.x

1. Download

  • Download source code and pre-built libraries
    git clone https://github.com/iwatake2222/play_with_tflite.git
    cd play_with_tflite
    git submodule update --init
    sh InferenceHelper/third_party/download_prebuilt_libraries.sh
  • Download models
    sh ./download_resource.sh

2-a. Build in Linux

cd pj_tflite_cls_mobilenet_v2   # for example
mkdir -p build && cd build
cmake ..
make
./main

2-b. Build in Windows (Visual Studio)

  • Configure and Generate a new project using cmake-gui for Visual Studio 2019 64-bit
    • Where is the source code : path-to-play_with_tflite/pj_tflite_cls_mobilenet_v2 (for example)
    • Where to build the binaries : path-to-build (any)
  • Open main.sln
  • Set the main project as the startup project, then build and run!

2-c. Build in Android Studio

  • Open the ViewAndroid directory as a project in Android Studio
  • Copy the resource directory to /storage/emulated/0/Android/data/com.iwatake.viewandroidtflite/files/Documents/resource
    • the directory is created after running the app (so the first run is expected to fail because the model files cannot be read)
  • Modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select the image processor you want to use
    • set(ImageProcessor_DIR "${CMAKE_CURRENT_LIST_DIR}/../../../../../pj_tflite_cls_mobilenet_v2/image_processor")
    • replace pj_tflite_cls_mobilenet_v2 with another project name
  • By default, InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK is used. You can modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select which delegate to use; InferenceHelper::TENSORFLOW_LITE_GPU usually gives better performance.
    • You also need to select the framework when calling InferenceHelper::create (see the example under Options below)

Note

Options (Delegate)

# Edge TPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
cp libedgetpu.so.1.0 libedgetpu.so.1
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
sudo LD_LIBRARY_PATH=./ ./main
# you may get "Segmentation fault (core dumped)" without sudo

# GPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
# you may need `sudo apt install ocl-icd-opencl-dev` or `sudo apt install libgles2-mesa-dev`

# XNNPACK
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=on

# NNAPI (Note: NNAPI is used on Android, so modify CMakeLists.txt in Android Studio rather than running the following command)
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_NNAPI=on

You also need to select the framework when calling InferenceHelper::Create.
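
For example, each project's engine code creates the helper like this (the pattern appears in detection_engine.cpp in the issues below); comment/uncomment the line matching the delegate you enabled with cmake:

// Select the framework/delegate when creating the inference helper.
// Enum values seen in this repository: kTensorflowLiteXnnpack,
// kTensorflowLiteGpu, kTensorflowLiteEdgetpu.
inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteXnnpack));
// inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteGpu));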

EdgeTPU

NNAPI

By default, NNAPI selects the most appropriate accelerator for the model. You can specify which accelerator to use yourself by modifying the following code in InferenceHelperTensorflowLite.cpp:

// options.accelerator_name = "qti-default";
// options.accelerator_name = "qti-dsp";
// options.accelerator_name = "qti-gpu";
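
For reference, a minimal sketch of how such an accelerator name is typically passed to the TensorFlow Lite NNAPI delegate. This assumes the standard StatefulNnApiDelegate API and an existing tflite::Interpreter named interpreter; the repository's InferenceHelperTensorflowLite.cpp wraps this in its own code:

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"

// Force a specific accelerator instead of letting NNAPI auto-select.
// "qti-dsp" / "qti-gpu" are Qualcomm device names; availability depends on the device.
tflite::StatefulNnApiDelegate::Options options;
options.accelerator_name = "qti-dsp";
tflite::StatefulNnApiDelegate nnapi_delegate(options);
interpreter->ModifyGraphWithDelegate(&nnapi_delegate);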

License

Acknowledgements

  • This project utilizes OSS (Open Source Software)
  • This project utilizes models from other projects:
    • Please find model_information.md in resource.zip

play_with_tflite's People

Contributors

iwatake2222


play_with_tflite's Issues

convert_model_to_tflite

I converted the model to tflite using the yolov5 repo, then ran it in your repo and got the error: Invalid input tensor info (input_1:0).
We hope you can answer. Thanks very much!
link: ultralytics/yolov5#251

How to run YOLOv5 on edge TPU

Hello,
Thank you for sharing this code.
I am trying to run YOLOv5 on the Edge TPU but have run into many issues.

I downloaded YOLOv5s model (model_float32.tflite) from the following link.
https://github.com/PINTO0309/PINTO_model_zoo/blob/main/059_yolov5/22_yolov5s_new/download.sh

And I changed the inference helper in detection_engine.cpp and the cmake option as described in the README.

inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteEdgetpu));                                       

I was able to run it, but it seems that the model is running on the CPU, not the TPU.
Am I missing something?

Also, is the model (model_float32.tflite) the right model for the TPU? Don't we need to convert it to an Edge TPU model using the Edge TPU compiler?

Thanks

Discrepancy in Model Performance Compared to Example

Environment

  • TensorflowLite
  • Ubuntu 22.04

Issue Details

I've been using the example code provided in the repository (https://github.com/iwatake2222/play_with_tflite/tree/master/pj_tflite_face_landmark_with_attention) to work with the face mesh with attention model. However, I've noticed a significant difference in performance compared to the results shared in another example I found online (https://storage.googleapis.com/tfjs-models/demos/face-landmarks-detection/index.html?model=mediapipe_face_mesh). I am running the code from a virtual machine with Ubuntu 22.04. I followed the steps to set up TensorFlow Lite and haven't made any changes to the test script (main.cpp). Below are two images that illustrate the difference in performance between models.

Capture4

image

Just wanted to drop a line to say a big thanks in advance to anyone who can help out with this.

Error on make for not-x64

Thanks for building this project iwatake. I'm trying to build an EdgeTPU CPP example for my Coral Dev board.

I was able to build and run the binary for x86 using your instructions. But when I tried for arm-v8 it failed with the following error:

(base) shariqm@bigsir:~/code/coral/EdgeTPU_CPP/project_classification_tflite/build$ cmake .. -DARCH_TYPE=aarch64 -DUSE_EDGETPU=on
... [success]
(base) shariqm@bigsir:~/code/coral/EdgeTPU_CPP/project_classification_tflite/build$ make
Scanning dependencies of target main
[ 50%] Building CXX object CMakeFiles/main.dir/main.cpp.o
[100%] Linking CXX executable main
/usr/bin/ld: ../../third_party/tensorflow_prebuilt/generic-aarch64_armv8-a/lib/libtensorflow-lite.a(interpreter.o): Relocations in generic ELF (EM: 183)
...
/usr/bin/ld: ../../third_party/tensorflow_prebuilt/generic-aarch64_armv8-a/lib/libtensorflow-lite.a(interpreter.o): Relocations in generic ELF (EM: 183)
../../third_party/tensorflow_prebuilt/generic-aarch64_armv8-a/lib/libtensorflow-lite.a: error adding symbols: File in wrong format

I think it's some type of cross-compiling issue but I'm not sure. Any idea? I'm using Ubuntu 18.04 so perhaps that's a problem:

(base) shariqm@bigsir:~/code/coral/EdgeTPU_CPP/project_classification_tflite/build$ uname -a
Linux bigsir 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
(base) shariqm@bigsir:~/code/coral/EdgeTPU_CPP/project_classification_tflite/build$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic

Recipe for target 'main' failed

Hi, I was trying to build according to the guide from the readme; everything was fine until I ran 'cmake ..'.
But when I hit the 'make' step, the system gave me this:

/home/leduy99/project/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so: undefined reference to `typeinfo for std::thread::_State@GLIBCXX_3.4.22'
/home/leduy99/project/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so: undefined reference to `std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())@GLIBCXX_3.4.22'
/home/leduy99/project/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so: undefined reference to `std::thread::_State::~_State()@GLIBCXX_3.4.22'
/home/leduy99/project/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so: undefined reference to `powf@GLIBC_2.27'
/home/leduy99/project/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so: undefined reference to `expf@GLIBC_2.27'
/home/leduy99/project/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so: undefined reference to `logf@GLIBC_2.27'
collect2: error: ld returned 1 exit status
CMakeFiles/main.dir/build.make:149: recipe for target 'main' failed
make[2]: *** [main] Error 1
CMakeFiles/Makefile2:69: recipe for target 'CMakeFiles/main.dir/all' failed
make[1]: *** [CMakeFiles/main.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Anyone know how to fix this? Thanks :D

mac cmake error

Issue report
issue, bug, question

Environment (Hardware)

  • Hardware: mac
  • Software: macOS 11.5.2, cmake version 3.18.4
    (Please include version information)

Project Name

pj_tflite_face_blazeface

Error Log

CMake Error at /Users/gqs/Downloads/play_with_tflite/InferenceHelper/third_party/cmakes/tflite.cmake:35 (file):
  file COPY cannot find
  "/Users/gqs/Downloads/play_with_tflite/InferenceHelper/third_party/cmakes/../tflite_prebuilt/ubuntu/libtensorflowlite.so":
  No such file or directory.
Call Stack (most recent call first):
  /Users/gqs/Downloads/play_with_tflite/InferenceHelper/inference_helper/CMakeLists.txt:101 (include)

Initialization Error on Linux

Hi good Sir,
I'm pretty new to this.
I followed the installation instructions and compiled successfully.
I built and tried pj_tflite_track_deepsort_person-reidentification; for both png and mp4 it returns the following error.

(py3.8-env) mft@538ddba63bce:/workspace/play_with_tflite/pj_tflite_track_deepsort_person-reidentification/build# ./main /workspace/douga/test.PNG
[InferenceHelper][78] Use TensorflowLite XNNPACK Delegate
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[InferenceHelperTensorflowLite][370] Input num = 1
[InferenceHelperTensorflowLite][373] tensor[0]->name: images
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[1]: 480
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[2]: 640
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[3]: 3
[InferenceHelperTensorflowLite][381] tensor[0]->type: not quantized
[InferenceHelperTensorflowLite][387] Output num = 1
[InferenceHelperTensorflowLite][390] tensor[0]->name: Identity
[InferenceHelperTensorflowLite][392] tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][392] tensor[0]->dims->size[1]: 6300
[InferenceHelperTensorflowLite][392] tensor[0]->dims->size[2]: 85
[InferenceHelperTensorflowLite][398] tensor[0]->type: not quantized
[InferenceHelper][78] Use TensorflowLite XNNPACK Delegate
[InferenceHelperTensorflowLite][370] Input num = 1
[InferenceHelperTensorflowLite][373] tensor[0]->name: input_1
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[1]: 128
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[2]: 64
[InferenceHelperTensorflowLite][375] tensor[0]->dims->size[3]: 3
[InferenceHelperTensorflowLite][381] tensor[0]->type: not quantized
[InferenceHelperTensorflowLite][387] Output num = 1
[InferenceHelperTensorflowLite][390] tensor[0]->name: Identity
[InferenceHelperTensorflowLite][392] tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][392] tensor[0]->dims->size[1]: 512
[InferenceHelperTensorflowLite][398] tensor[0]->type: not quantized
[ERR: InferenceHelperTensorflowLite][463] Invalid name (inputs)
[ERR: InferenceHelperTensorflowLite][181] Invalid input tensor info (inputs)
[ERR: FeatureEngine][101] Inference helper is not created
Initialization Error

Is there anything I am missing to get this to work?

Thank you.

Compile GPU windows

Hello,

I am sorry if this is not the place to write. I just wanted to ask if you know whether it is possible to run the solution with GPU (TENSORFLOW_LITE_GPU) on the Windows platform. I tried with your release but with no success.

Thank you very much,
Josip

Error on pj_tflite_depth_midas sample

Issue report
Bug

Environment (Hardware)

  • Intel i7, NVIDIA GTX 1080
  • Windows 10, Visual Studio 2019 x64

Project Name

pj_tflite_depth_midas

Issue Details

It compiles just fine and generates the executable,
but when I run it, it throws:

[ERR: InferenceHelper][201] Unsupported inference helper type (3)
[ERR: InferenceHelper][205] Failed to create inference helper
[ERR: DepthEngine][128] Inference helper is not created
Initialization Error

How to Reproduce

Just try to run it.

Error Log

[ERR: InferenceHelper][201] Unsupported inference helper type (3)
[ERR: InferenceHelper][205] Failed to create inference helper
[ERR: DepthEngine][128] Inference helper is not created
Initialization Error

Thank you for the help.

Mobilenet SSD V1 Get Different Results on Same Image

Issue report

issue

Environment (Hardware)

  • Hardware: Device, CPU, GPU, etc.
  • Software: OS, Compiler, etc.
    (Please include version information)

Android 9, calling the inference model in JNI and using the same version of TF Lite as mentioned in this repo.

Project Name

pj_tflite_det_mobilenetssd_v1

Issue Details

I tried to run the detection pj_tflite_det_mobilenetssd_v1 on Android.
For the same image, I get different detections across multiple inferences.

For example, the first time I got this result:

image

And the second time, it may give the same or a slightly different result, such as:

image

I logged the input data and confirmed the input data is the same.
I also tried different options (CPU, GPU, NNAPI, etc.) and got similar results.
On a desktop PC it looks good; I have only seen this issue on the Android phone.

How to Reproduce

Steps to reproduce the behavior. Please include your cmake command.

Error Log

error log

Additional Information

Add any other context about the problem here.

Inference error with MNN model on pj_tflite_face_dbface.

Hi, I get the following errors when executing the main program with an MNN model on pj_tflite_face_dbface.
The TFLITE model works correctly.

If the MNN model is not supposed to work here, please let me know; that would be very helpful for me.
I read your face_detection_engine.cpp code and found the MNN line, so I tried an MNN model converted from tflite. (I used the MNNConvert command of the MNN project.)

./MNNConvert -f TFLITE --modelFile dbface_mbnv2_480x640.tflite --MNNModel dbface_mbnv2_480x640.mnn --bizCode biz

Environment

  • Ubuntu 20.04

Errors

root@e02f6a807410:/app/play_with_tflite/pj_tflite_face_dbface/build_mnn# ./main a.jpg
[ERR: InferenceHelper][201] Unsupported inference helper type (3)
[ERR: InferenceHelper][205] Failed to create inference helper
[ERR: FaceDetectionEngine][128] Inference helper is not created
Initialization Error

My Try...

I changed CMakeLists.txt from TFLite to MNN:

cmake_minimum_required(VERSION 3.0)

set(LibraryName "ImageProcessor")

# Create library
add_library (${LibraryName} image_processor.cpp image_processor.h face_detection_engine.cpp face_detection_engine.h)

# For OpenCV
find_package(OpenCV REQUIRED)
target_include_directories(${LibraryName} PUBLIC ${OpenCV_INCLUDE_DIRS})
target_link_libraries(${LibraryName} ${OpenCV_LIBS})

# Link Common Helper module
add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/../../common_helper common_helper)
target_include_directories(${LibraryName} PUBLIC ${CMAKE_CURRENT_LIST_DIR}/../../common_helper)
target_link_libraries(${LibraryName} CommonHelper)

# For InferenceHelper
set(INFERENCE_HELPER_DIR ${CMAKE_CURRENT_LIST_DIR}/../../InferenceHelper/)
#set(INFERENCE_HELPER_ENABLE_TFLITE ON CACHE BOOL "TFLITE")
#set(INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK ON CACHE BOOL "TFLITE_XNNPACK")
set(INFERENCE_HELPER_ENABLE_MNN ON CACHE BOOL "MNN")
add_subdirectory(${INFERENCE_HELPER_DIR}/inference_helper inference_helper)
target_include_directories(${LibraryName} PUBLIC ${INFERENCE_HELPER_DIR}/inference_helper)
target_link_libraries(${LibraryName} InferenceHelper)

I tried changing the #if definition from 1 to 0 in face_detection_engine.cpp, but that did not resolve it.
I also installed Vulkan following your inference helper page.

Speed is slow

Issue report
issue, bug, question

I ran it on my Linux server but it is quite slow. I ran 10 frames for a single image and got 800 ms for image processing.
See the details below:
image

Environment (Hardware)

  • Hardware: Device, CPU, GPU, etc.
  • Software: OS, Compiler, etc.
    (Please include version information)

Project Name

pj_xxx

Issue Details

A clear and concise description of what the issue is.

How to Reproduce

Steps to reproduce the behavior. Please include your cmake command.

Error Log

error log

Additional Information

Add any other context about the problem here.
image

How to run Yolov5 on Qualcomm RB5 device with GPU?

Environment (Hardware)

  • Hardware: Qualcomm RB5 with CPU/GPU/DSP support
  • Software: Ubuntu 18.04.

Project Name

pj_tflite_det_yolov5

Issue Details

Using information from the README.md file, I can successfully build and run the Yolov5 model on a Qualcomm RB5 device. I understand that, by default, the model runs on the CPU.
Now I want to run it on the GPU, so I modified inference_helper as follows:
inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteGpu));
Then I ran the CMake command, but there is an error.

How to Reproduce

  1. Clone project into RB5 device: $ git clone https://github.com/iwatake2222/play_with_tflite.git
  2. $ cd play_with_tflite/pj_tflite_det_yolov5
  3. Open file: vi image_processor/detection_engine.cpp
  4. Comment line 84: inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteXnnpack));
  5. Uncomment line 85: inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteGpu));
  6. $ mkdir build && cd build
  7. $ cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
    [main] CMAKE_SYSTEM_PROCESSOR = aarch64, BUILD_SYSTEM = aarch64
  8. An error is displayed.

Error Log

image

Additional Information

Reference link to RB5 device:
https://developer.qualcomm.com/qualcomm-robotics-rb5-kit/hardware-reference-guide

Please help me fix this problem!
Thank you very much!

Stuck on bazel build tensorflow lite 2.6

Issue report
question

Environment (Hardware)

  • Hardware: Raspberry Pi 4, 4 GB RAM
  • Software: OS: Ubuntu Server 20.04.3 LTS
  • uname -a:
    Linux ubuntu 5.4.0-1050-raspi #56-Ubuntu SMP PREEMPT Thu Jan 13 13:09:35 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
  • bazel --version:
    bazel 3.7.2- (@non-git)

Issue Details

I want to build tflite on a Raspberry Pi 4 following this page.
And I am stuck on the error shown below:

How to Reproduce

An error occurs when I use the command:

-c opt \
--config elinux_aarch64 \
--define tensorflow_mkldnn_contraction_kernel=0 \
--copt -O3 \
--strip always \
--define tflite_with_xnnpack=true

Terminal & Error Log

image

Additional Information

What is this problem, and how can I fix it?

Segmentation fault on my own SSDV1 tflite model when running

Hello,

Firstly, thank you for your clear instructions for running a TFLite model for object detection in C++. I ran into a problem when replacing your model with my own, which I trained following the Tensorflow Object Detection instructions and exported to .tflite: the code hits a Segmentation fault and the execution ends. Do you have any idea about that?
Thank you in advance.
Toan.

My errors:

pi@raspberrypi:/play_with_tflite/pj_tflite_det_mobilenetssd_v1/build $ ./main
[InferenceHelper][52] Use TensorflowLite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[InferenceHelperTensorflowLite][345] Input num = 1
[InferenceHelperTensorflowLite][348] tensor[0]->name: normalized_input_image_tensor
[InferenceHelperTensorflowLite][350] tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][350] tensor[0]->dims->size[1]: 300
[InferenceHelperTensorflowLite][350] tensor[0]->dims->size[2]: 300
[InferenceHelperTensorflowLite][350] tensor[0]->dims->size[3]: 3
[InferenceHelperTensorflowLite][356] tensor[0]->type: not quantized
[InferenceHelperTensorflowLite][362] Output num = 4
[InferenceHelperTensorflowLite][365] tensor[0]->name: TFLite_Detection_PostProcess
[InferenceHelperTensorflowLite][367] tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][367] tensor[0]->dims->size[1]: 10
[InferenceHelperTensorflowLite][367] tensor[0]->dims->size[2]: 4
[InferenceHelperTensorflowLite][373] tensor[0]->type: not quantized
[InferenceHelperTensorflowLite][365] tensor[1]->name: TFLite_Detection_PostProcess:1
[InferenceHelperTensorflowLite][367] tensor[1]->dims->size[0]: 1
[InferenceHelperTensorflowLite][367] tensor[1]->dims->size[1]: 10
[InferenceHelperTensorflowLite][373] tensor[1]->type: not quantized
[InferenceHelperTensorflowLite][365] tensor[2]->name: TFLite_Detection_PostProcess:2
[InferenceHelperTensorflowLite][367] tensor[2]->dims->size[0]: 1
[InferenceHelperTensorflowLite][367] tensor[2]->dims->size[1]: 10
[InferenceHelperTensorflowLite][373] tensor[2]->type: not quantized
[InferenceHelperTensorflowLite][365] tensor[3]->name: TFLite_Detection_PostProcess:3
[InferenceHelperTensorflowLite][367] tensor[3]->dims->size[0]: 1
[InferenceHelperTensorflowLite][373] tensor[3]->type: not quantized
Segmentation fault
pi@raspberrypi:/play_with_tflite/pj_tflite_det_mobilenetssd_v1/build $

MoveNet is running very slow on video (FPS 10 to 12)

Issue report
issue, bug, question

Environment (Hardware)

  • Hardware: CPU -AMD Ryzen 9 5950X 16-Core Processor 3.40 GHz
  • Software: Windows 10, MS visual studio
    (Please include version information)

Project Name

pj_tflite_pose_movenet_multi

Issue Details

I followed the build and compile steps for MoveNet on TensorflowLite. It works, but when I pass a video as the input argument I get an fps of only 10-12. I used the lightning model movenet_multipose_256x256_lightning.tflite.

How to Reproduce

Perform the setup exactly as mentioned in the readme. I did a cmake using the GUI and did not change the defaults.

Error Log

As in the image below, the fps is just 11.9. I have attached the original video for your reference as well. Downloaded from https://www.pexels.com/video/three-women-posing-close-to-each-other-4723077/

test

production.ID_4723077.mp4

Additional Information

Add any other context about the problem here.

How to build 140_Ultra-Fast-Lane-Detection in Android.

Issue report
issue, bug, question

Environment (Hardware)

  • Hardware: 2020 m1 mac book pro, Galaxy S21(Android)
  • Software: 11(Android) , osX 12

Project Name

pj_140_Ultra-Fast-Lane-Detection

Issue Details

  1. I followed all your instructions (Android build: https://github.com/iwatake2222/play_with_tflite/blob/master/README.md). However, nothing responded on my cell phone. The application built normally, but it does not respond even when a button in the application is pressed.
  2. I want to build the 140-Ultra Fast Lane Detection you uploaded in an Android environment, but I am not sure how because I am an Android beginner. I would appreciate it if you could answer here or send me an email at [email protected].

fatal error: tensorflow/lite/interpreter.h: No such file or directory

Issue report
question

Environment (Hardware)

  • Hardware: Raspberry Pi 4, 4 GB RAM
  • Software: OS: Ubuntu Server 20.04.3 LTS
  • uname -a:
    Linux ubuntu 5.4.0-1052-raspi #58-Ubuntu SMP PREEMPT Mon Feb 7 16:52:35 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Project Name

pj_tflite_lane_lanenet-lane-detection

Issue Details

An error occurs when building:
image

And I have already put libtensorflowlite.so in /InferenceHelper/third_party/tensorflow_prebuilt/aarch64/

and the tensorflow source path is /home/ubuntu/tensorflow

How to Reproduce

In /pj_tflite_lane_lanenet-lane-detection/build directory, run

cmake .

and run

make

Error Log

[  7%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper.cpp.o
In file included from /home/ubuntu/Codes/Project_with_tflite/InferenceHelper/inference_helper/inference_helper.cpp:35:
/home/ubuntu/Codes/Project_with_tflite/InferenceHelper/inference_helper/inference_helper_tensorflow_lite.h:27:10: fatal error: tensorflow/lite/interpreter.h: No such file or directory
   27 | #include <tensorflow/lite/interpreter.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/build.make:63: image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:221: image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/all] Error 2

Additional Information

What is this problem, and how can I fix it?
And thank you for the repository.

how to debug in vs2019

Issue report
issue

Environment (Hardware)

  • Hardware: CPU
  • Software: windows10 pro

Project Name

pj_tflite_track_deepsort

Issue Details

void InferenceHelperTensorflowLite::DisplayModelInfo(const tflite::Interpreter& interpreter)
{
    /* Memo: If you get error around here in Visual Studio, please make sure you don't use Debug */

Thanks for your sharing; you did great work.
I want to learn your code, but I cannot debug in VS2019.
Debugging is the best way for me to learn code.
I found your comment about this; do you know how to fix this issue?

I tried adding
#define _ITERATOR_DEBUG_LEVEL 0
but this did not work.

How to Reproduce

debug in vs2019

Question: how did you train coco_ssd_mobilenet_v1 so the model runs fast?

Issue report
question

In the models folder there is a model named "coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.tflite". It runs very fast, around 20 ms on my laptop. I want to add 2 classes to the model; I have the datasets, and I followed the TFOD2 tutorial with the pre-trained model "SSD MobileNet v2 320x320" from the TF2 model zoo, then quantized my model using a representative_dataset, but my model was slow, around 1250 ms. I'm wondering if you did anything that makes your model run fast.

Wrong keypoints position on MoveNet Multipose demo using kTensorflowLiteGpu

I'm using an Ubuntu server with two GTX 1080 GPUs and OpenCV 4.5.3 with CUDA 11.

Using the default configuration of XNNPack everything works flawlessly.

With pj_tflite_pose_movenet_multi example

When I try to use the GPU example built with the proper cmake configuration

inference_helper_.reset(InferenceHelper::Create(InferenceHelper::kTensorflowLiteGpu));

And using the standard model model_float32.tflite, I have found that the output seems broken.
The pose keypoints are out of place, but the blue boxes seem to work perfectly.

Also, is there any tutorial on how to use the TensorRT models?
Thanks,

Performance Degradation with YOLOv5 on CPU – Need Assistance in Improving FPS

Dear play_with_tflite Contributors,

First and foremost, I would like to extend my sincere gratitude for your hard work in developing this library. It has been immensely beneficial to my learning journey, and I appreciate the clarity and quality of the code.

I am currently facing a performance issue while running YOLOv5lite on a CPU. I've made some modifications to main.cpp to enable camera support. Initially, the software runs as expected with a frame rate of approximately 10-12 FPS. However, after the first 10 frames, I observe a significant drop in performance, with FPS values plummeting to around 2-3 FPS.

This performance bottleneck is a critical issue for my project, as I aim to achieve near real-time processing speeds, ideally around 25 FPS, on a CPU-only setup. I understand that running advanced models like YOLOv5 exclusively on a CPU can be challenging, but I am seeking advice or potential optimizations that could help in improving the frame rate.

Screenshot from 2024-01-03 16-26-52
Screenshot from 2024-01-03 16-31-18

about coral::RunInference() memcpy error in model_utils.cc

Dear Sir:

I used your project as a base to build my own project on the EdgeTPU Dev Board.
The error I get is a coral::RunInference() memcpy error in model_utils.cc.
After using gdb to check the error, I found the input's address is 0x0.

Could you do me a favor and give me some hints to solve this error?

Thank you.

Best Regards,
Akio

pj_tflite_hand_mediapipe cannot run inference

Issue report
issue, bug, question

Environment (Hardware)

  • Hardware: android phone(ROG 5s)
  • Software: android

Project Name

pj_tflite_hand_mediapipe

Issue Details

In the pj_tflite_hand_mediapipe directory, I can't find instructions on how to use it, so I downloaded the models from the mediapipe hands website (https://google.github.io/mediapipe/solutions/models.html#hands). I also changed the code because the model input name and tensor dims are not the same. Then I ran the demo, but it cannot run inference: in the UI, the inference time is 0 ms. So I want to know: what is your mediapipe hand model? Really appreciated!!!

Segmentation fault in rpi

Issue report
issue, bug, question

Environment (Hardware)

  • Raspberry Pi 4B, Raspbian
  • Software: OS, Compiler, etc.
    (Please include version information)

Project Name

pj_ (all projects)

Issue Details

Working fine on a laptop but failing to run on the rpi.

How to Reproduce

Steps to reproduce the behavior. Please include your cmake command.

Error Log

[InferenceHelper][81] Use TensorflowLite XNNPACK Delegate
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[InferenceHelperTensorflowLite][307] Input num = 1
[InferenceHelperTensorflowLite][310]     tensor[0]->name: input:0
[InferenceHelperTensorflowLite][312]     tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][312]     tensor[0]->dims->size[1]: 256
[InferenceHelperTensorflowLite][312]     tensor[0]->dims->size[2]: 256
[InferenceHelperTensorflowLite][312]     tensor[0]->dims->size[3]: 3
[InferenceHelperTensorflowLite][318]     tensor[0]->type: not quantized
[InferenceHelperTensorflowLite][324] Output num = 1
[InferenceHelperTensorflowLite][327]     tensor[0]->name: Identity:0
[InferenceHelperTensorflowLite][329]     tensor[0]->dims->size[0]: 1
[InferenceHelperTensorflowLite][329]     tensor[0]->dims->size[1]: 6
[InferenceHelperTensorflowLite][329]     tensor[0]->dims->size[2]: 56
[InferenceHelperTensorflowLite][335]     tensor[0]->type: not quantized
Segmentation fault

Additional Information

I also tried increasing the swap memory up to 8 GB, but it did not help.
Screenshot from 2021-12-06 01-04-43

question

How do I add debug information in the makefile so that debugging works for "pj_tflite_track_deepsort"?

Thanks

Unresolved external symbol: "public: enum TfLiteStatus __cdecl tflite::impl::Interpreter::SetNumThreads(int)" (?SetNumThreads@Interpreter@impl@tflite@@QEAA?AW4TfLiteStatus@@H@Z)

Hi iwatake2222:
Thanks for your great work. Based on your tutorial configuration in a WIN10 environment, I had the following problem while building the project:

LNK2019: unresolved external symbol "public: enum TfLiteStatus __cdecl tflite::impl::Interpreter::SetNumThreads(int)" (?SetNumThreads@Interpreter@impl@tflite@@QEAA?AW4TfLiteStatus@@H@Z) referenced in function "public: virtual int __cdecl InferenceHelperTensorflowLite::initialize(char const *,int)" (?initialize@InferenceHelperTensorflowLite@@UEAAHPEBDH@Z) in main E:\01.work\edge_tpu_c++\play_with_tflite_0811\play_with_tflite-20200811\build\pj_tflite_cls_mobilenet_v2\InferenceHelper.lib(InferenceHelperTensorflowLite.obj).

I think maybe it is because of libtensorflowlite.so.if.lib. What should I do?
Also, the edgetpu_prebuild files you shared do not include an x64_windows file. Can you share it with us?
If possible, could you send me the complete project files? I mean all the files in third_party, because I want to use the same versions of the tensorflow and edgetpu source code as yours.

Looking forward to your reply. thank you very much.

Read access violation: std::_Vector_alloc<std::_Vec_base_types<int,std::allocator<int> > >::_Myfirst(...) returned 0xFFFFFFFFFFFFFFFF.

Issue report
issue, bug, question

Hello, thank you very much for your work. This work is very good, but when I built the project according to your instructions, the following error occurred. My platform is Windows 10 64-bit and the tool used is Visual Studio 2017.

Environment (Hardware)

  • Hardware: Device, CPU, GPU, etc.
  • Software: OS, Compiler, etc.

Windows 10, CPU
Visual Studio 2017 MSVC x64

(Please include version information)

Project Name

pj_tflite_face_dbface and pj_tflite_face_facemesh

Issue Details

A clear and concise description of what the issue is.

image

How to Reproduce

Steps to reproduce the behavior. Please include your cmake command.

Error Log

error log

"main.exe" (Win32): Loaded "E:\6.tmp\tflite_pro\pj_tflite_face_dbface\build_vs2017_x64\Debug\main.exe". Symbols loaded.
"main.exe" (Win32): Loaded "C:\Windows\System32\ntdll.dll". Symbols loaded.
... (similar "Loaded" lines for the other system DLLs; a few, e.g. tsafedoc64.dll and winhafnt64.dll, report "Cannot find or open the PDB file") ...
"main.exe" (Win32): Loaded "C:\Windows\System32\opencv_world3413d.dll". Cannot find or open the PDB file.
"main.exe" (Win32): Loaded "E:\6.tmp\tflite_pro\pj_tflite_face_dbface\build_vs2017_x64\libtensorflowlite.so". Cannot find or open the PDB file.
[15688] DLL_PROCESS_ATTACH [E:\6.tmp\tflite_pro\pj_tflite_face_dbface\build_vs2017_x64\Debug\main.exe]
HookLdrLoadDll : winspool.drv->>>>>>>PdrvFlt.dll , status=00000000
The thread 0x3b1c exited with return value 0 (0x0).
Exception thrown: read access violation.
std::_Vector_alloc<std::_Vec_base_types<int,std::allocator > >::_Myfirst(...) returned 0xFFFFFFFFFFFFFFFF.

Additional Information

Add any other context about the problem here.

DroNet with OpenCV(cv::dnn + Darknet) in C++

Issue report
issue

Environment (Hardware)

  • Hardware: Lenovo ThinkPad, CPU
  • Software: Ubuntu 20.1, g++ 9
    (Please include version information)

Project Name

pj_DroNet with OpenCV(cv::dnn + Darknet) in C++

Issue Details

Error in image_processor.cpp: 'atanf' is not a member of 'std'; did you mean 'atanh'?

How to Reproduce

cmake ..
make

Error Log

/home/abhijith/tflite/play_with_tflite/pj_tflite_det_dronet/image_processor/image_processor.cpp: In function ‘void AnalyzeFlow(cv::Mat&, std::vector<Track>&)’:
/home/abhijith/tflite/play_with_tflite/pj_tflite_det_dronet/image_processor/image_processor.cpp:157:20: error: ‘atanf’ is not a member of ‘std’; did you mean ‘atanh’?
  157 |             : std::atanf((bbox_now.y - bbox_past.y) / static_cast<float>(bbox_now.x - bbox_past.x));
      |                    ^~~~~
      |                    atanh
make[2]: *** [image_processor/CMakeFiles/ImageProcessor.dir/build.make:76: image_processor/CMakeFiles/ImageProcessor.dir/image_processor.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:164: image_processor/CMakeFiles/ImageProcessor.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

Additional Information

I also found the same issue in the dronet original repo.
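
A likely fix (an assumption on my part, not a confirmed patch from the maintainer): some libstdc++ versions do not declare the C99 float functions such as atanf in namespace std, so calling the double overload std::atan, which <cmath> always provides, sidesteps the error:

#include <cmath>

// Hypothetical fix: use std::atan (always declared by <cmath>) instead of
// std::atanf, which older libstdc++ headers may not put in namespace std.
float angle = std::atan((bbox_now.y - bbox_past.y)
                        / static_cast<float>(bbox_now.x - bbox_past.x));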

Modify interface of InferenceHelper

Add tensor information to the initialize function, so that the same interface works with other inference libraries that require input/output tensor names at initialization.
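
A sketch of what such an extended interface might look like (purely illustrative; the names and types below are assumptions, not the actual InferenceHelper API):

#include <string>
#include <vector>

// Hypothetical extension: pass input/output tensor names to initialize()
// so that backends which need them at initialization time can share the
// same interface as TensorFlow Lite.
class InferenceHelper {
public:
    virtual ~InferenceHelper() = default;
    virtual int initialize(const std::string& model_filename,
                           int num_threads,
                           const std::vector<std::string>& input_tensor_names,
                           const std::vector<std::string>& output_tensor_names) = 0;
};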

edgetpu header file not found

Hi @iwatake2222. Thanks for your awesome work.
I want to test different detection models on a Google Coral Edge TPU attached to a Raspberry Pi 4. First I downloaded all the libraries by running download_prebuilt_libraries.sh (this also installs many unnecessary libraries for GPU and Windows, but anyway).
Then I built the temp_pj_tflite_edgetpuapi_cls_mobilenet_v2 demo, but it gave an error that edgetpu.h was not found while making. Then I downloaded the edgetpu_runtime library from the link given at the end, ran install.sh successfully, copied the edgetpu header file to the demo, and tried to build again, but it again returned some problems. Can you suggest what exactly is going on here? Where is the correct edgetpu header file that we can use? Thanks.
Output is here:

$ cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
[main] CMAKE_SYSTEM_PROCESSOR = armv7l, BUILD_SYSTEM = armv7
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/build

After make I got this:

ubuntu@UDENTIFY:~/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/build $ make
[ 33%] Building CXX object CMakeFiles/main.dir/Main.cpp.o
In file included from /home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp:19:
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/model_utils.h:19:5: error: ‘edgetpu’ has not been declared
     edgetpu::EdgeTpuContext* edgetpu_context);
     ^~~~~~~
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/model_utils.h:19:28: error: expected ‘,’ or ‘...’ before ‘*’ token
     edgetpu::EdgeTpuContext* edgetpu_context);
                            ^
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp: In function ‘int main()’:
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp:55:18: error: ‘edgetpu’ was not declared in this scope
  std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context = edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
                  ^~~~~~~
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp:55:18: note: suggested alternative: ‘getpt’
  std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context = edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
                  ^~~~~~~
                  getpt
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp:55:41: error: template argument 1 is invalid
  std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context = edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
                                         ^
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp:55:61: error: ‘edgetpu’ is not a class, namespace, or enumeration
  std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context = edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
                                                             ^~~~~~~
/home/ubuntu/play_with_tflite/temp_pj_tflite_edgetpuapi_cls_mobilenet_v2/Main.cpp:56:108: error: request for member ‘get’ in ‘edgetpu_context’, which is of non-class type ‘int’
  std::unique_ptr<tflite::Interpreter> interpreter = coral::BuildEdgeTpuInterpreter(*model, edgetpu_context.get());
                                                                                                            ^~~
make[2]: *** [CMakeFiles/main.dir/build.make:63: CMakeFiles/main.dir/Main.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/main.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
